
CN110765170A - User portrait generation method and wearable device - Google Patents


Info

Publication number: CN110765170A
Application number: CN201910920337.7A
Authority: CN (China)
Prior art keywords: user, weight coefficient, preset, portrait, wearable device
Legal status: Pending
Original language: Chinese (zh)
Inventor: 于雷
Applicant / current assignee: Vivo Mobile Communication Co Ltd
Priority application: CN201910920337.7A

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20: Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F 16/24: Querying
    • G06F 16/245: Query processing
    • G06F 16/2457: Query processing with adaptation to user needs
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The embodiment of the invention provides a user portrait generation method and a wearable device. The method is applied to a wearable device and comprises the following steps: receiving a user portrait generation operation; in response to the user portrait generation operation, acquiring physiological parameters of a user and acquiring user behavior data of the user at a mobile terminal, wherein the wearable device is in communication connection with the mobile terminal; and generating a user portrait of the user according to the physiological parameters and the user behavior data. The embodiment of the invention solves the prior-art problem that a wearable device mainly depends on manual input by the user to obtain user information.

Description

User portrait generation method and wearable device
Technical Field
The invention relates to the technical field of mobile communication, in particular to a user portrait generation method and wearable equipment.
Background
With the rapid development of mobile communication technology, various wearable devices have gradually entered people's lives. The functions of wearable devices have also gradually matured: a wearable device is no longer merely a piece of hardware, but realizes a variety of functions through software support, data interaction, cloud interaction and other means, bringing great changes to users' lives and perceptions.
In the era of big data, a large amount of user information circulates on the network. Each piece of concrete user information can be abstracted into a label, and these labels are used to portray the user, yielding a concrete user portrait, so that targeted services can be provided to the user based on the user portrait.
At present, a wearable device mainly relies on manual input by the user to obtain user information. However, a wearable device is usually small, so the input operation is inconvenient; and because its display interface is small, multi-level menus may have to be set up, making the input operation rather complicated.
Disclosure of Invention
The embodiment of the invention provides a user portrait generation method and a wearable device, aiming to solve the prior-art problem that a wearable device mainly relies on manual input by the user to obtain user information.
In order to solve the technical problem, the invention is realized as follows:
in a first aspect, an embodiment of the present invention provides a method for generating a user portrait, which is applied to a wearable device, and the method includes:
receiving a user representation generation operation;
responding to the user portrait generation operation, acquiring physiological parameters of a user, and acquiring user behavior data of the user at a mobile terminal, wherein the wearable device is in communication connection with the mobile terminal;
and generating a user portrait of the user according to the physiological parameters and the user behavior data.
In a second aspect, an embodiment of the present invention further provides a wearable device, where the wearable device includes:
the operation receiving module is used for receiving user portrait generation operation;
the data acquisition module is used for responding to the user portrait generation operation, acquiring physiological parameters of a user and acquiring user behavior data of the user at a mobile terminal, wherein the wearable device is in communication connection with the mobile terminal;
and the portrait generation module is used for generating the user portrait of the user according to the physiological parameters and the user behavior data.
In a third aspect, an embodiment of the present invention further provides a wearable device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps in the user representation generation method as described above when executing the computer program.
In a fourth aspect, the embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when executed by a processor, the computer program implements the steps in the user representation generating method described above.
In an embodiment of the invention, a user portrait generation operation is received; in response to the user portrait generation operation, physiological parameters of a user are acquired and user behavior data of the user at a mobile terminal is acquired, wherein the wearable device is in communication connection with the mobile terminal; and a user portrait of the user is generated according to the physiological parameters and the user behavior data. The user's preferences are determined based on the user behavior data and the user's physiological state is determined based on the physiological parameters, finally yielding a user portrait that comprehensively represents the user's preferences and physiological state. The user does not need to manually enter various items of user information, which avoids the inconvenient and tedious operation of manually entering such information on a wearable device.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the description of the embodiments of the present invention will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to these drawings without inventive labor.
FIG. 1 is a flow chart of a method for generating a user representation according to an embodiment of the present invention;
FIG. 2 is a second flowchart of a method for generating a user representation according to an embodiment of the present invention;
FIG. 3 is a flowchart of a second example of an embodiment of the invention;
FIG. 4 is one of the schematic diagrams of the second example of an embodiment of the invention;
FIG. 5 is a second schematic diagram of the second example of an embodiment of the invention;
FIG. 6 is a third schematic diagram of the second example of an embodiment of the invention;
FIG. 7 is a fourth schematic diagram of the second example of an embodiment of the invention;
FIG. 8 is a fifth schematic diagram of the second example of an embodiment of the invention;
FIG. 9 is a sixth schematic diagram of the second example of an embodiment of the invention;
FIG. 10 is a block diagram of a wearable device provided by an embodiment of the invention;
FIG. 11 is a block diagram of another wearable device provided by an embodiment of the invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
In various embodiments of the present invention, it should be understood that the sequence numbers of the following processes do not mean the execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present invention.
In the embodiment of the invention, the wearable device includes but is not limited to a smart bracelet, smart glasses, a smart watch and the like, and the mobile terminal includes but is not limited to a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal, a wearable device and a pedometer.
Referring to fig. 1, an embodiment of the present invention provides a method for generating a user portrait, which is applied to a wearable device, and the method includes:
Step 101, receiving a user portrait generation operation.
The user portrait, also called a user persona, is an effective tool for characterizing a target user and connecting user demands with design direction, and a service provider can provide targeted services for the user based on the user portrait. The user portrait generation operation is used to trigger generation of a user portrait; it may include a starting operation of the wearable device, for example the wearable device may trigger the user portrait generation operation each time it is started (system power-up).
The user portrait generation operation may also include receiving a user portrait generation instruction, for example an instruction actively triggered by the user.
Step 102, in response to the user portrait generation operation, acquiring physiological parameters of the user and acquiring user behavior data of the user at a mobile terminal, wherein the wearable device is in communication connection with the mobile terminal.
The physiological parameters of the user comprise parameters such as body temperature, heart rate, iris and/or skin state and the like; the physiological parameters can be used for judging various physiological state information such as age, sex, body fat and the like of the user when constructing the user portrait.
And the wearable equipment responds to the user portrait generation operation, acquires the physiological parameters of the user and acquires the user behavior data of the user at the mobile terminal.
Specifically, when acquiring the physiological parameters of the user, the wearable device can recognize parameters such as body temperature, heart rate and/or skin state through a preset chip in contact with the user's skin, and the user's iris information can be acquired by photographing. The wearable device can establish a communication connection with the mobile terminal and obtain the user's behavior data on the mobile terminal; the user behavior data consists of records of the user's usage behavior when using the mobile terminal, from which information such as the user's preferences can be inferred.
Optionally, the wearable device may be in communication connection with the mobile terminal through a wired link, or wirelessly through a network, and may request the mobile terminal to send the user behavior data to the wearable device.
It can be understood that, in the user portrait generation method of the embodiment of the present invention, the user portrait generation operation may be triggered by the user, or may be triggered by the wearable device itself, for example each time the device is powered on, or periodically according to a preset update period. Correspondingly, each time the user portrait generation operation is responded to, the acquired user behavior data may be the user behavior data counted over each preset statistics period, or the user behavior data from the last execution of the user portrait generation operation up to the current time; that is, the data collection time range may be set by the user, and the embodiment of the present invention is not limited in this respect.
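For orientation only, the data gathered at this stage could be modelled roughly as in the sketch below; every field name is an assumption, since the patent only names the parameter categories (body temperature, heart rate, iris, skin state) and describes the behavior data as usage records:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class PhysiologicalParams:
    """Parameters read by the wearable device's own sensors (field names assumed)."""
    body_temperature: float = 0.0
    heart_rate: int = 0
    skin_moisture: float = 0.0   # e.g. 0.43 for 43 %
    iris_capture: bytes = b""    # raw iris image data, if available

@dataclass
class BehaviorData:
    """Usage records requested from the connected mobile terminal."""
    records: List[Tuple[str, str]] = field(default_factory=list)  # (preset label, behavior)
    period_start: str = ""  # start of the statistics window
    period_end: str = ""    # end of the statistics window
```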
Step 103, generating a user portrait of the user according to the physiological parameters and the user behavior data.
After the physiological parameters and the user behavior data of the user are obtained, a user portrait of the user is generated from both together. Optionally, the user portrait may take the form of a vector containing at least two dimensions, where each dimension corresponds to a preset label; the weight coefficient of the user portrait in a dimension is the user's weight coefficient under that preset label, and the weight coefficient represents the feature value of the user's feature corresponding to the preset label. Each preset label forms part of the user portrait, and once the weight coefficients of all preset labels are obtained, the user portrait of the user is finally obtained. The user portrait represents the user's preferences and physiological state, where the preferences are determined based on the user behavior data and the physiological state is determined based on the physiological parameters. For example, a user portrait of a user is:
(4, 7, -8, 14, 20, 19)
that is, the user's weight coefficients under the preset labels a1 to a6 are 4, 7, -8, 14, 20 and 19 respectively; if the preset label corresponding to a1 is "likes watching videos", the user's degree of liking is 4.
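A minimal sketch of this vector-of-weights representation follows (Python is used for illustration; the label meanings and the dictionary layout are assumptions, and only the six weight values come from the example above):

```python
# One weight coefficient per preset label; a1..a6 are the placeholder
# label names from the example, with assumed meanings.
user_portrait = {
    "a1": 4,    # e.g. "likes watching videos": degree of liking is 4
    "a2": 7,
    "a3": -8,
    "a4": 14,
    "a5": 20,
    "a6": 19,
}

# Equivalent vector form: a fixed label order defines the dimensions.
label_order = ["a1", "a2", "a3", "a4", "a5", "a6"]
portrait_vector = [user_portrait[label] for label in label_order]
print(portrait_vector)  # [4, 7, -8, 14, 20, 19]
```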
In embodiments of the invention, a user portrait generation operation is received; in response to the user portrait generation operation, physiological parameters of a user are acquired and user behavior data of the user at a mobile terminal is acquired, wherein the wearable device is in communication connection with the mobile terminal; and a user portrait of the user is generated according to the physiological parameters and the user behavior data. The user's preferences are determined based on the user behavior data and the physiological state is determined based on the physiological parameters, finally yielding a user portrait that comprehensively represents the user's preferences and physiological state. In other words, the user portrait is generated automatically without the user manually entering various items of user information, which avoids the inconvenient and tedious operation of manual input on the wearable device; the embodiment of the invention thus solves the prior-art problem that a wearable device mainly depends on manual user input to obtain user information.
Optionally, in this embodiment of the present invention, in a case that the user representation generating operation is a starting operation of the wearable device, after step 103, the method further includes:
acquiring a target portrait with the closest similarity to the user portrait from a preset portrait database;
acquiring preset starting configuration information corresponding to the target portrait;
and executing the starting operation according to the preset starting configuration information.
The wearable device is preset with a portrait database containing a large number of sample portraits; optionally, the sample portraits are generated from physiological parameters and user behavior data collected from a plurality of users. The portrait database also records preset starting configuration information for each sample portrait. The preset starting configuration information contains the user's starting configuration parameters, which reflect the user's requirements for starting the wearable device, such as the user's various preferences. When the starting is executed with these configuration parameters, the wearable device is configured automatically according to the user's preferences, and the user does not need to manually enter the various requirements.
After the wearable device obtains the user portrait of the user to be configured, it determines the target portrait with the closest similarity to the user portrait according to a preset similarity algorithm, and configures the starting configuration information of the wearable device according to the target portrait, so that the parameters of the wearable device after starting meet the user's requirements. Optionally, the similarity algorithm may be cosine similarity, Pearson correlation coefficient similarity, Euclidean similarity, or a similar algorithm.
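A hedged sketch of this matching step is given below; the sample portraits, the configuration fields and the choice of cosine similarity are illustrative assumptions (the patent equally allows Pearson or Euclidean similarity):

```python
import math

def cosine_similarity(a, b):
    """Standard cosine similarity of two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical portrait database: (sample portrait vector, preset starting
# configuration). Both the vectors and the configuration keys are invented
# for illustration only.
portrait_database = [
    ([14, 20, 19, 4, 7, -8], {"font": "large", "recommended_app": "opera"}),
    ([11, 21, 17, 9, 2, 5],  {"font": "normal", "recommended_app": "fitness"}),
]

def pick_starting_config(user_vector):
    """Return the starting configuration of the most similar sample portrait."""
    _, best_config = max(
        portrait_database,
        key=lambda entry: cosine_similarity(user_vector, entry[0]),
    )
    return best_config

print(pick_starting_config([14, 20, 19, 4, 6, -7]))  # -> {'font': 'large', ...}
```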
In addition, after the user representation of the user is obtained, other operations may be performed based on the user representation, such as applications in the medical field.
Referring to fig. 2, another embodiment of the present invention provides a method for generating a user portrait, which is applied to a wearable device, and the method includes:
step 201, receiving a user portrait generation operation.
The user portrait, also called a user persona, is an effective tool for characterizing a target user and connecting user demands with design direction. The user portrait generation operation is used to trigger generation of a user portrait; it may include a starting operation of the wearable device, for example the wearable device may trigger the user portrait generation operation each time it is started (system power-up).
The user portrait generation operation may also include receiving a user portrait generation instruction, for example an instruction actively triggered by the user.
Step 202, responding to the user portrait generation operation, acquiring physiological parameters of a user, and acquiring user behavior data of the user at a mobile terminal, wherein the wearable device is in communication connection with the mobile terminal.
The physiological parameters of the user comprise parameters such as body temperature, heart rate, iris and/or skin state and the like; the physiological parameters can be used for judging various physiological state information such as age, sex, body fat and the like of the user when constructing the user portrait.
And the wearable equipment responds to the user portrait generation operation, acquires the physiological parameters of the user and acquires the user behavior data of the user at the mobile terminal.
Specifically, when acquiring the physiological parameters of the user, the wearable device can recognize parameters such as body temperature, heart rate and/or skin state through a preset chip in contact with the user's skin, and the user's iris information can be acquired by photographing. The wearable device can establish a communication connection with the mobile terminal and obtain the user's behavior data on the mobile terminal; the user behavior data consists of records of the user's usage behavior when using the mobile terminal, from which information such as the user's preferences can be inferred.
Optionally, the wearable device may be in communication connection with the mobile terminal through a wired link, or wirelessly through a network, and may request the mobile terminal to send the user behavior data to the wearable device.
Step 203, determining the first weight coefficient corresponding to each physiological parameter according to a first correspondence between physiological parameters and weight coefficients.
The first correspondence is the correspondence between the specific value of a physiological parameter and a weight coefficient, and the first weight coefficient is used to indicate the specific value or specific range of the user's physiological parameter. When each physiological parameter of the user has been obtained, the first weight coefficient corresponding to that physiological parameter is looked up according to the first correspondence between physiological parameters and weight coefficients.
As a first example, take age among the physiological parameters: in the first correspondence, the first weight coefficient corresponding to each age group is shown in Table 1 below:
table 1:
Age parameter    First weight coefficient
0 to 25 years old    11
26 to 32 years old    12
33 to 50 years old    13
51 years old and above    14
Taking sex among the physiological parameters as another example: in the first correspondence, when the sex is male, the corresponding first weight coefficient is 15, and when the sex is female, the corresponding first weight coefficient is 16.
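The lookup can be expressed directly as a small mapping. The sketch below follows Table 1 and the sex example above; the boundary handling (inclusive upper bounds) and the function names are assumptions:

```python
def age_weight(age):
    """First weight coefficient for the age parameter, per Table 1."""
    if age <= 25:
        return 11
    if age <= 32:
        return 12
    if age <= 50:
        return 13
    return 14

def sex_weight(sex):
    """First weight coefficient for the sex parameter: 15 for male, 16 for female."""
    return 15 if sex == "male" else 16

# Example: a 55-year-old male user.
print(age_weight(55), sex_weight("male"))  # 14 15
```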
and 204, performing data processing on the user behavior data according to a preset data processing rule to obtain a second weight coefficient corresponding to each preset label.
A preset label comprises target behaviors in the user behavior data. The user behavior data is a record of the user's usage behavior on the mobile terminal, for example operation data in a preset application program, such as specific operations of browsing or liking; information such as the user's preferences can be inferred from the user behavior data.
After the user behavior data is obtained, data processing is performed on the behavior data to obtain the second weight coefficient corresponding to each preset label. For example, if a preset label is a certain category of news, the second weight coefficient represents the user's attitude toward that category of news; the attitude may be positive feedback, such as liking, or negative feedback, such as disliking.
Step 205, determining the dimension corresponding to each of the first weight coefficient and the second weight coefficient according to the second corresponding relationship between the weight coefficient and the dimension, so as to obtain the user portrait of the user.
Wherein the user representation includes at least two dimensions and a weight coefficient for each of the at least two dimensions.
The second correspondence is the correspondence between dimensions and weight coefficients. After the first weight coefficients and the second weight coefficients are obtained, the dimension corresponding to each weight coefficient is determined and each weight coefficient is filled into the dimension to which it belongs, thereby obtaining the user portrait of the user.
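Step 205 can be pictured as filling a fixed dimension order with the computed coefficients. The sketch below is only illustrative; the dimension names and their order (the second correspondence) are assumptions:

```python
# Hypothetical second correspondence: the position of each weight
# coefficient in the portrait vector.
dimension_order = [
    "age", "sex", "skin_moisture",            # dimensions filled by first weight coefficients
    "news_preference", "video_preference",    # dimensions filled by second weight coefficients
]

def assemble_portrait(first_weights, second_weights):
    """Fill every dimension with the weight coefficient it corresponds to."""
    all_weights = {**first_weights, **second_weights}
    return [all_weights[name] for name in dimension_order]

first = {"age": 14, "sex": 15, "skin_moisture": 20}
second = {"news_preference": 4, "video_preference": -8}
print(assemble_portrait(first, second))  # [14, 15, 20, 4, -8]
```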
Optionally, in an embodiment of the present invention, step 204 includes:
firstly, for each preset label, determining the target behaviors contained in the preset label, and obtaining the number of times each target behavior occurs in the user behavior data;
and secondly, obtaining a second weight coefficient corresponding to the preset label according to the times of the target behaviors and a preset data processing formula.
In the first step, for each preset label, the target behaviors contained in the preset label are first determined; the target behaviors include browsing, liking, forwarding and the like, and the number of times each target behavior occurs in the user behavior data is counted. Then, in the second step, the second weight coefficient corresponding to each preset label is obtained based on the numbers of occurrences of the target behaviors and a preset data processing formula.
Specifically, in the embodiment of the present invention, the preset tag is a first tag, and the first tag includes a first target behavior, a second target behavior, and a third target behavior;
the second step includes:
inputting the times of the target behaviors into the following first formula to obtain a second weight coefficient corresponding to the preset label:
the first formula:
[first formula: shown as an equation image in the original document; P is obtained by applying the rounding function int to an expression in N1, N2, N3 and M1]
wherein P is the second weight coefficient, N1 is the number of times of the first target behavior, N2 is the number of times of the second target behavior, and N3 is the number of times of the third target behavior; m1 is data in a first preset range of values.
Here int is a rounding (integer) function. When the preset label is a first label, the first label comprises a first target behavior, a second target behavior and a third target behavior; the specific types of the three target behaviors can be set as needed. After the numbers of occurrences N1 to N3 of the three target behaviors are obtained, the second weight coefficient is obtained through the rounding function in the first formula. M1 is a value in a first preset value range, which may be (1, 10) or another range; for example, M1 may be 3.
As an example, the first formula may be used to calculate the weight coefficient of a positive preference; when the preset label is a positive preference, the three target behaviors may be, in order, appearance (impression), click and like. For a positive preference, the more often the target behaviors occur, the smaller the weight coefficient, which indicates a higher degree of preference.
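Because the first and second formulas are reproduced only as images, the sketch below shows just the stated structure of the computation: count how often each target behavior occurs, then apply a rounding-based preset formula. The record layout and the placeholder formula are assumptions, not the patent's actual formula:

```python
from collections import Counter

def count_target_behaviors(behavior_log, target_behaviors):
    """Count occurrences of each target behavior of one preset label.

    behavior_log is assumed to be a sequence of behavior names collected
    from the mobile terminal for that label.
    """
    counts = Counter(behavior_log)
    return [counts[b] for b in target_behaviors]

def second_weight(counts, preset_formula):
    """Apply the preset data processing formula to the behavior counts."""
    return preset_formula(*counts)

# Illustrative usage with a placeholder rounding-based formula
# (an assumption standing in for the patent's first formula).
M1 = 3
log = ["appear", "click", "like", "appear", "appear"]
n1, n2, n3 = count_target_behaviors(log, ["appear", "click", "like"])
weight = second_weight((n1, n2, n3), lambda a, b, c: int(M1 * 10 / (a + b + c + 1)))
print(n1, n2, n3, weight)  # 3 1 1 5
```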
On the other hand, in the embodiment of the present invention, the preset tag is a second tag, and the second tag includes a fourth target behavior and a fifth target behavior;
obtaining a second weight coefficient corresponding to the preset tag according to the number of times of the target behavior and a preset data processing formula, wherein the obtaining of the second weight coefficient comprises:
inputting the times of the target behaviors into the following second formula to obtain a second weight coefficient corresponding to the preset label:
the second formula:
[second formula: shown as an equation image in the original document; P is obtained by applying the rounding function int to an expression in N4, N5 and M2]
wherein P is the second weight coefficient, N4 is the number of the fourth target behavior, and N5 is the number of the fifth target behavior; m2 is data in a second predetermined range of values.
When the preset label is a second label, the second label comprises a fourth target behavior and a fifth target behavior; the specific types of the target behaviors can be set as needed. After the number of occurrences of each target behavior is obtained, the second weight coefficient is obtained through the rounding function in the second formula. M2 is a value in a second preset value range, which may be (3, 30) or another range; for example, M2 may be 9.
It should be noted that M1 and M2 stand in a multiple relationship (with the example values above, M2 = 3 × M1).
As an example, the second formula may be used to calculate the weight coefficient of a negative preference; when the preset label is a negative preference, the two target behaviors may be, respectively, the number of times negative feedback is given on related information and the number of times negative feedback is given on related videos. For a negative preference, the more often the target behaviors occur, the smaller the weight coefficient, which indicates a higher degree of aversion.
As a second example, taking the case where the wearable device itself initiates the user portrait generation operation, fig. 3 illustrates a user portrait generation method provided in an embodiment of the present invention, which mainly includes the following steps:
step 301, the wearable device starts.
Step 302, connecting to the mobile terminal to acquire user behavior data.
Referring to fig. 4, which takes a smart watch as the wearable device: when the user turns on the smart watch, the display interface reminds the user that it is connecting to the mobile phone, and fig. 5 shows the smart watch informing the user that it obtains user behavior data in order to provide better service. In addition, if the user cancels the connection to the mobile phone, the process jumps to a cold-start option in which the user enters the related information manually for collection, which is not described again here.
Step 303, determining whether the user is wearing the device:
if not, executing step 304 to prompt the user to wear the device, and executing step 305 after wearing is completed;
if yes, going to step 305 to obtain the user's physiological parameters.
In step 303, whether the user is wearing the watch can be judged by the sensor on the back of the smart watch dial; if the watch is worn, step 305 is carried out and the physiological parameters of the user are collected. As shown in fig. 6, if the watch is not worn, the user is prompted to wear it, and the process jumps to step 305 after wearing is completed.
In step 305, with the watch being worn, the watch rapidly acquires physiological parameters (age, sex, iris, skin moisture, etc.) through devices such as sensors and a camera, and after acquisition forms the user's physiological parameters (with first weight coefficients in the value range 11 to 23), as shown in Table 2 below:
table 2:
Physiological parameter    First weight coefficient
Age    11
Sex    15
Iris parameter    19
……    ……
The physiological parameter table mainly reflects the correspondence between the physiological information and the first weight coefficients, and the value range of the first weight coefficients does not intersect the value range of the second weight coefficients. Meanwhile, the number of integers corresponds to the number of possible physiological parameter values: four types of physiological parameters are set in this example, requiring 13 numbers in total, so 11 to 23 is chosen as the value range, avoiding the values used by the second weight coefficients.
The physiological parameters can be defined freely; the rule is that the values of the same class of physiological parameter are consecutive integers, which keeps the vectors consistent and ensures matching accuracy when similarity is calculated later.
For example, for the iris parameter, taking the size of the intestinal ring in the iris as the measured feature, the correspondence between the intestinal ring size and the first weight coefficient is shown in Table 3 below:
table 3:
Iris intestinal ring    First weight coefficient
Larger than the standard intestinal ring    17
Smaller than the standard intestinal ring    18
Matches the standard intestinal ring    19
Further, taking the skin moisture status as an example, the correspondence between the skin moisture status and the first weight coefficient is shown in the following table 4:
table 4:
Skin moisture state    First weight coefficient
Less than 40%    20
40% to 45%    21
46% to 50%    22
Over 50%    23
Step 306, generating a user portrait;
In the process of generating the user portrait, on the one hand, the first weight coefficient corresponding to each physiological parameter needs to be determined; on the other hand, data processing is performed on the user behavior data to obtain the second weight coefficients.
The user portrait of the user is then generated from the first weight coefficients and the second weight coefficients, arranged in a preset order according to the second correspondence between weight coefficients and dimensions.
Step 307, matching against the sample portraits in the portrait database to obtain the target portrait.
If the user portrait of the user is the vector
[user portrait vector: shown as an equation image in the original document]
and the portrait database stores sample vectors
[sample portrait vectors: shown as equation images in the original document]
then vector similarity is calculated for matching. Taking cosine similarity as an example, a similarity coefficient T is calculated according to the following third formula:
[third formula: shown as an equation image in the original document]
The similarity coefficient T is calculated according to the third formula; the closer T is to 0, the more similar the two vectors are.
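The third formula itself appears only as an image in the original. One reconstruction consistent with both the "cosine similarity" wording and the statement that T closer to 0 means more similar is the cosine distance between the user portrait vector A and a sample vector B; this form is an assumption, not a verbatim reproduction of the patent's formula:

$$ T = 1 - \frac{\vec{A} \cdot \vec{B}}{\lVert \vec{A} \rVert \, \lVert \vec{B} \rVert} $$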
Step 308, taking the preset starting configuration information of the target portrait as the configuration information for the current start, and starting the wearable device.
After the target portrait is determined, the wearable device is started according to the preset starting configuration information of the target portrait. For example, suppose a 55-year-old man who loves Suzhou opera uses the watch for the first time and starts it up; the first weight coefficient table obtained is shown in Table 5 below:
table 5:
Physiological parameter    First weight coefficient
Age (55 years old, elderly)    14
Skin moisture    20
Iris parameter    19
……    ……
And a second weight coefficient table as shown in table 6:
table 6:
[Table 6, the second weight coefficient table, is shown as images in the original document and is not reproduced here]
A user portrait is then generated:
[user portrait vector: shown as an equation image in the original document]
After comparison against the database, the sample vector
[matched sample portrait vector: shown as an equation image in the original document]
is found to be the closest, so the smart watch is started according to that sample portrait's preset starting configuration information: the display defaults to a large font, as shown in fig. 7; the desktop recommends installing an opera app, as shown in fig. 8; and the back of the dial in contact with the skin is adjusted to a dry-skin mode, as shown in fig. 9.
In embodiments of the invention, a user portrait generation operation is received; in response to the operation, physiological parameters of the user are acquired and user behavior data of the user at a mobile terminal is acquired, wherein the wearable device is in communication connection with the mobile terminal; and a user portrait of the user is generated according to the physiological parameters and the user behavior data. The user's preferences are determined based on the user behavior data and the physiological state is determined based on the physiological parameters, finally yielding a user portrait that comprehensively represents the user's preferences and physiological state. In other words, the user portrait is generated automatically, without the user manually entering various items of user information, which avoids the inconvenient and tedious operation of manual input on the wearable device.
With the above description of the method for generating a user portrait according to the embodiment of the present invention, a wearable device according to the embodiment of the present invention will be described with reference to the accompanying drawings.
Referring to fig. 10, an embodiment of the present invention further provides a wearable device 1000, including:
an operation receiving module 1001 is used for receiving user portrait generation operation.
A data obtaining module 1002, configured to obtain a physiological parameter of a user in response to the user portrait generating operation, and receive user behavior data sent by a mobile terminal in communication connection with the wearable device 1000.
A representation generating module 1003, configured to generate a user representation of the user according to the physiological parameter and the user behavior data.
Optionally, in an embodiment of the present invention, the user representation includes at least two dimensions and a weight coefficient of each of the at least two dimensions;
the representation generation module 1003 includes:
the first generation submodule is used for determining a first weight coefficient corresponding to each physiological parameter according to a first corresponding relation between the physiological parameter and the weight coefficient;
the second generation submodule is used for carrying out data processing on the user behavior data according to a preset data processing rule to obtain a second weight coefficient corresponding to each preset label, wherein the preset label comprises a target behavior in the user behavior data;
and the vector generation submodule is used for determining the dimension corresponding to each first weight coefficient and each second weight coefficient according to the second corresponding relation between the weight coefficients and the dimensions to obtain the user portrait of the user.
Optionally, in this embodiment of the present invention, the second generating sub-module includes:
the data acquisition unit is used for determining target behaviors contained in each preset label and acquiring the times of the target behaviors in the user behavior data;
and the data processing unit is used for obtaining a second weight coefficient corresponding to the preset label according to the times of the target behaviors and a preset data processing formula.
Optionally, in the embodiment of the present invention, the preset tag is a first tag, and the first tag includes a first target behavior, a second target behavior, and a third target behavior;
the data processing unit is configured to:
inputting the times of the target behaviors into the following first formula to obtain a second weight coefficient corresponding to the preset label:
[first formula: shown as an equation image in the original document]
wherein P is the second weight coefficient, N1 is the number of times of the first target behavior, N2 is the number of times of the second target behavior, and N3 is the number of times of the third target behavior; m1 is data in a first preset range of values.
Optionally, in the embodiment of the present invention, the preset tag is a second tag, and the second tag includes a fourth target behavior and a fifth target behavior;
the data processing unit is configured to:
inputting the times of the target behaviors into the following second formula to obtain a second weight coefficient corresponding to the preset label:
[second formula: shown as an equation image in the original document]
wherein P is the second weight coefficient, N4 is the number of the fourth target behavior, and N5 is the number of the fifth target behavior; m2 is data in a second predetermined range of values.
Optionally, in this embodiment of the present invention, in a case that the user representation generating operation is a starting operation of the wearable device 1000, the wearable device 1000 further includes:
the starting processing module is used for acquiring a target portrait with the closest similarity to the user portrait from a preset portrait database;
acquiring preset starting configuration information corresponding to the target portrait;
and executing the starting operation according to the preset starting configuration information.
The wearable device 1000 provided in the embodiment of the present invention can implement each process implemented by the wearable device in the method embodiments of fig. 1 to fig. 9, and details are not repeated here to avoid repetition.
In an embodiment of the present invention, the operation receiving module 1001 receives a user portrait generation operation; the data acquisition module 1002, in response to the user portrait generation operation, acquires physiological parameters of the user and receives user behavior data sent by a mobile terminal in communication connection with the wearable device 1000; and the portrait generation module 1003 generates a user portrait of the user according to the physiological parameters and the user behavior data. The user's preferences are determined based on the user behavior data and the physiological state is determined based on the physiological parameters, finally yielding a user portrait that comprehensively represents the user's preferences and physiological state. The user therefore does not need to manually enter various items of user information, which avoids the inconvenient and tedious operation of manual input on the wearable device 1000.
Fig. 11 is a schematic hardware structure diagram of a wearable device implementing various embodiments of the present invention;
the wearable device 1100 includes, but is not limited to: radio frequency unit 1101, network module 1102, audio output unit 1103, input unit 1104, sensor 1105, display unit 1106, user input unit 1107, interface unit 1108, memory 1109, processor 1110, and power supply 1111. Those skilled in the art will appreciate that the wearable device structure shown in fig. 11 does not constitute a limitation of the wearable device, and that the wearable device may include more or fewer components than shown, or combine certain components, or a different arrangement of components. In the embodiment of the present invention, the mobile terminal includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal, a wearable device, and a pedometer, and the wearable device includes, but is not limited to, a smart bracelet, smart glasses, a smart watch, and the like.
The radio frequency unit 1101 is configured to receive a user portrait generation operation;
a processor 1110 configured to obtain a physiological parameter of a user in response to the user representation generating operation, and obtain user behavior data of the user at a mobile terminal, where the wearable device is in communication connection with the mobile terminal;
and generating a user portrait of the user according to the physiological parameters and the user behavior data.
In an embodiment of the invention, a user representation generation operation is received; responding to the user portrait generation operation, acquiring physiological parameters of a user, and acquiring user behavior data of the user at a mobile terminal, wherein the wearable device is in communication connection with the mobile terminal; according to the physiological parameters and the user behavior data, the user portrait of the user is generated, the preference of the user is determined based on the user behavior data, the physiological state of the user is determined based on the physiological parameters, the user portrait comprehensively representing the preference and the physiological state of the user is finally obtained, the user does not need to manually input various user information, the user is prevented from manually inputting various user information in wearable equipment, and the operation is inconvenient and tedious.
It should be noted that, in this embodiment, the wearable device 1100 may implement each process in the method embodiment of the present invention and achieve the same beneficial effects, and for avoiding repetition, details are not described here.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 1101 may be configured to receive and transmit signals during message transmission or a call; specifically, it receives downlink data from a base station and then delivers it to the processor 1110 for processing, and it transmits uplink data to the base station. In general, the radio frequency unit 1101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 1101 may also communicate with a network and other devices through a wireless communication system.
The wearable device provides wireless broadband internet access to the user through the network module 1102, such as to assist the user in emailing, browsing web pages, and accessing streaming media.
The audio output unit 1103 may convert audio data received by the radio frequency unit 1101 or the network module 1102 or stored in the memory 1109 into an audio signal and output as sound. Also, the audio output unit 1103 may also provide audio output related to a specific function performed by the wearable device 1100 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 1103 includes a speaker, a buzzer, a receiver, and the like.
The input unit 1104 is used to receive audio or video signals. The input unit 1104 may include a Graphics Processing Unit (GPU) 11041 and a microphone 11042; the graphics processor 11041 processes image data of still pictures or video obtained by an image capturing device, such as a camera, in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 1106. The image frames processed by the graphics processor 11041 may be stored in the memory 1109 (or another storage medium) or transmitted via the radio frequency unit 1101 or the network module 1102. The microphone 11042 may receive sound and can process such sound into audio data. In the phone call mode, the processed audio data may be converted into a format that can be transmitted to a mobile communication base station via the radio frequency unit 1101 and output.
The wearable device 1100 also includes at least one sensor 1105, such as light sensors, motion sensors, and other sensors. Specifically, the light sensor includes an ambient light sensor that adjusts the brightness of the display panel 11061 according to the brightness of ambient light, and a proximity sensor that turns off the display panel 11061 and/or a backlight when the wearable device 1100 moves to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the posture of the wearable device (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), and vibration identification related functions (such as pedometer, tapping); the sensors 1105 may also include fingerprint sensors, pressure sensors, iris sensors, molecular sensors, gyroscopes, barometers, hygrometers, thermometers, infrared sensors, etc., and will not be described in detail herein.
The display unit 1106 is used to display information input by a user or information provided to the user. The Display unit 1106 may include a Display panel 11061, and the Display panel 11061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 1107 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the wearable device. Specifically, the user input unit 1107 includes a touch panel 11071 and other input devices 11072. The touch panel 11071, also referred to as a touch screen, may collect touch operations by a user on or near the touch panel 11071 (e.g., operations by a user on or near the touch panel 11071 using a finger, a stylus, or any other suitable object or attachment). The touch panel 11071 may include two portions of a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, and sends the touch point coordinates to the processor 1110, and receives and executes commands sent from the processor 1110. In addition, the touch panel 11071 may be implemented by various types, such as a resistive type, a capacitive type, an infrared ray, and a surface acoustic wave. The user input unit 1107 may include other input devices 11072 in addition to the touch panel 11071. In particular, the other input devices 11072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein.
Further, the touch panel 11071 can be overlaid on the display panel 11061, and when the touch panel 11071 detects a touch operation thereon or nearby, the touch operation is transmitted to the processor 1110 to determine the type of the touch event, and then the processor 1110 provides a corresponding visual output on the display panel 11061 according to the type of the touch event. Although in fig. 11, the touch panel 11071 and the display panel 11061 are two independent components to implement the input and output functions of the wearable device, in some embodiments, the touch panel 11071 and the display panel 11061 may be integrated to implement the input and output functions of the wearable device, and is not limited herein.
The interface unit 1108 is an interface through which an external device is connected to the wearable apparatus 1100. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 1108 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the wearable apparatus 1100 or may be used to transmit data between the wearable apparatus 1100 and an external device.
The memory 1109 may be used to store software programs as well as various data. The memory 1109 may mainly include a storage program area and a storage data area, where the storage program area may store an operating system, an application program (such as a sound playing function, an image playing function, etc.) required by at least one function, and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the cellular phone, and the like. In addition, the memory 1109 may include high speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The processor 1110 is a control center of the wearable device, connects various parts of the entire wearable device using various interfaces and lines, and performs various functions of the wearable device and processes data by running or executing software programs and/or modules stored in the memory 1109 and calling data stored in the memory 1109, thereby performing overall monitoring of the wearable device. Processor 1110 may include one or more processing units; preferably, the processor 1110 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into processor 1110.
The wearable device 1100 may further include a power source 1111 (e.g., a battery) for supplying power to various components, and preferably, the power source 1111 may be logically connected to the processor 1110 via a power management system, so as to manage charging, discharging, and power consumption management functions via the power management system.
In addition, the wearable device 1100 includes some functional modules that are not shown, and are not described in detail herein.
Preferably, an embodiment of the present invention further provides a wearable device, which includes a processor 1110, a memory 1109, and a computer program stored in the memory 1109 and capable of running on the processor 1110, where the computer program, when executed by the processor 1110, implements each process of the above-described user portrait generation method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements each process of the above-mentioned user image generation method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (10)

1. A user portrait generation method, applied to a wearable device, characterized by comprising the following steps:
receiving a user portrait generation operation;
in response to the user portrait generation operation, acquiring physiological parameters of a user and acquiring user behavior data of the user at a mobile terminal, wherein the wearable device is communicatively connected to the mobile terminal; and
generating a user portrait of the user according to the physiological parameters and the user behavior data.
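Purely as an illustration of the overall flow in claim 1, the sketch below stubs out both data sources; the parameter names, behavior names, and example values are assumptions, not part of the claim.

```python
from typing import Dict, List


def read_physiological_parameters() -> Dict[str, float]:
    # Stand-in for the wearable device's own sensor readout (assumed example values).
    return {"heart_rate": 72.0, "sleep_hours": 6.5}


def fetch_user_behavior_data() -> List[str]:
    # Stand-in for behavior records pulled from the communicatively connected mobile terminal.
    return ["open_music_app", "play_song", "start_run"]


def generate_user_portrait() -> Dict[str, object]:
    # Claim 1: in response to a portrait generation operation, gather both
    # data sources and combine them into a single portrait structure.
    physiological = read_physiological_parameters()
    behavior = fetch_user_behavior_data()
    return {"physiological": physiological, "behavior": behavior}


if __name__ == "__main__":
    print(generate_user_portrait())
```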
2. The user portrait generation method according to claim 1, wherein the user portrait comprises at least two dimensions and a weight coefficient for each of the at least two dimensions;
the generating a user portrait of the user comprises:
determining a first weight coefficient corresponding to each physiological parameter according to a first correspondence between physiological parameters and weight coefficients;
performing data processing on the user behavior data according to a preset data processing rule to obtain a second weight coefficient corresponding to each preset label, wherein the preset label comprises a target behavior in the user behavior data; and
determining the dimension corresponding to each first weight coefficient and each second weight coefficient according to a second correspondence between weight coefficients and dimensions, so as to obtain the user portrait of the user.
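A minimal sketch of the two correspondences in claim 2, under assumptions: the parameter names, label names, dimension names, and weight values are illustrative placeholders, and mapping a coefficient to its dimension via its source key is only one possible reading of the second correspondence.

```python
from typing import Dict

# First correspondence (assumed): bucketed physiological reading -> weight coefficient.
FIRST_CORRESPONDENCE = {
    ("heart_rate", "high"): 0.8,
    ("heart_rate", "normal"): 0.4,
    ("sleep_hours", "short"): 0.7,
    ("sleep_hours", "normal"): 0.3,
}

# Second correspondence (assumed): source of a weight coefficient -> portrait dimension.
SECOND_CORRESPONDENCE = {
    "heart_rate": "fitness",
    "sleep_hours": "rest",
    "sports_label": "sports_interest",
    "music_label": "music_interest",
}


def build_portrait(first_coeffs: Dict[str, float],
                   second_coeffs: Dict[str, float]) -> Dict[str, float]:
    """Assign every first and second weight coefficient to its dimension."""
    portrait: Dict[str, float] = {}
    for source, coeff in {**first_coeffs, **second_coeffs}.items():
        dimension = SECOND_CORRESPONDENCE[source]
        portrait[dimension] = coeff
    return portrait


if __name__ == "__main__":
    first = {"heart_rate": FIRST_CORRESPONDENCE[("heart_rate", "high")],
             "sleep_hours": FIRST_CORRESPONDENCE[("sleep_hours", "short")]}
    second = {"sports_label": 0.6, "music_label": 0.2}
    print(build_portrait(first, second))
    # -> {'fitness': 0.8, 'rest': 0.7, 'sports_interest': 0.6, 'music_interest': 0.2}
```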
3. The user portrait generation method according to claim 2, wherein the performing data processing on the user behavior data according to a preset data processing rule to obtain a second weight coefficient corresponding to each preset label comprises:
for each preset label, determining the target behavior contained in the preset label, and acquiring the number of times the target behavior occurs in the user behavior data; and
obtaining the second weight coefficient corresponding to the preset label according to the number of times of the target behavior and a preset data processing formula.
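A sketch of the per-label counting step in claim 3; the label-to-behavior mapping, the behavior names, and the capped ratio standing in for the preset data processing formula are all assumptions.

```python
from collections import Counter
from typing import Dict, Iterable, List

# Assumed preset labels, each listing the target behaviors it tracks.
PRESET_LABELS: Dict[str, List[str]] = {
    "sports_label": ["start_run", "open_fitness_app"],
    "music_label": ["open_music_app", "play_song"],
}


def second_weight_coefficients(behavior_data: Iterable[str],
                               normaliser: float = 10.0) -> Dict[str, float]:
    """For each preset label, count its target behaviors in the behavior data
    and turn the count into a weight coefficient with a simple (assumed) formula."""
    counts = Counter(behavior_data)
    coefficients: Dict[str, float] = {}
    for label, target_behaviors in PRESET_LABELS.items():
        total = sum(counts[b] for b in target_behaviors)
        # Assumed formula: normalised count, capped at 1.0.
        coefficients[label] = min(total / normaliser, 1.0)
    return coefficients


if __name__ == "__main__":
    data = ["open_music_app", "play_song", "play_song", "start_run"]
    print(second_weight_coefficients(data))
    # -> {'sports_label': 0.1, 'music_label': 0.3}
```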
4. The user portrait generation method according to claim 3, wherein the preset label is a first label, and the first label comprises a first target behavior, a second target behavior, and a third target behavior;
the obtaining the second weight coefficient corresponding to the preset label according to the number of times of the target behavior and a preset data processing formula comprises:
inputting the numbers of times of the target behaviors into the following first formula to obtain the second weight coefficient corresponding to the preset label:
[first formula, provided in the original filing as image FDA0002217358530000021]
wherein P is the second weight coefficient, N1 is the number of times of the first target behavior, N2 is the number of times of the second target behavior, N3 is the number of times of the third target behavior, and M1 is a value within a first preset value range.
5. The user portrait generation method according to claim 3, wherein the preset label is a second label, and the second label comprises a fourth target behavior and a fifth target behavior;
the obtaining the second weight coefficient corresponding to the preset label according to the number of times of the target behavior and a preset data processing formula comprises:
inputting the numbers of times of the target behaviors into the following second formula to obtain the second weight coefficient corresponding to the preset label:
[second formula, provided in the original filing as image FDA0002217358530000022]
wherein P is the second weight coefficient, N4 is the number of times of the fourth target behavior, N5 is the number of times of the fifth target behavior, and M2 is a value within a second preset value range.
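The first and second formulas of claims 4 and 5 are present only as images (FDA0002217358530000021 and FDA0002217358530000022), so their exact form cannot be reproduced here. Purely as an assumed stand-in, the sketch below uses a capped, normalised sum of the behavior counts, with M1 and M2 acting as the preset-range constants.

```python
def first_label_coefficient(n1: int, n2: int, n3: int, m1: float) -> float:
    """Assumed stand-in for the first formula of claim 4:
    P derived from the counts N1, N2, N3 of the three target behaviors
    and a constant M1 taken from the first preset value range."""
    return min((n1 + n2 + n3) / m1, 1.0)


def second_label_coefficient(n4: int, n5: int, m2: float) -> float:
    """Assumed stand-in for the second formula of claim 5:
    P derived from the counts N4, N5 and a constant M2 taken from the
    second preset value range."""
    return min((n4 + n5) / m2, 1.0)


if __name__ == "__main__":
    print(first_label_coefficient(n1=3, n2=1, n3=2, m1=10.0))   # 0.6
    print(second_label_coefficient(n4=7, n5=5, m2=20.0))        # 0.6
```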
6. The user portrait generation method according to claim 1, wherein, in a case where the user portrait generation operation is a start-up operation of the wearable device, the method further comprises, after the generating a user portrait of the user:
acquiring, from a preset portrait database, a target portrait that is most similar to the user portrait;
acquiring preset start-up configuration information corresponding to the target portrait; and
executing the start-up operation according to the preset start-up configuration information.
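A sketch of the start-up path in claim 6, with assumptions throughout: portraits are treated as vectors of dimension weights, cosine similarity stands in for the unspecified similarity measure, and the database entries and configuration fields are invented placeholders.

```python
import math
from typing import Dict, Tuple

# Assumed preset portrait database: name -> (target portrait, start-up configuration).
PORTRAIT_DATABASE: Dict[str, Tuple[Dict[str, float], Dict[str, str]]] = {
    "runner": ({"fitness": 0.9, "music_interest": 0.3},
               {"watch_face": "sport", "first_app": "workout"}),
    "listener": ({"fitness": 0.2, "music_interest": 0.9},
                 {"watch_face": "minimal", "first_app": "music"}),
}


def cosine_similarity(a: Dict[str, float], b: Dict[str, float]) -> float:
    keys = set(a) | set(b)
    dot = sum(a.get(k, 0.0) * b.get(k, 0.0) for k in keys)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0


def start_up(user_portrait: Dict[str, float]) -> Dict[str, str]:
    """Pick the most similar target portrait and return its start-up configuration."""
    best_name = max(PORTRAIT_DATABASE,
                    key=lambda name: cosine_similarity(user_portrait, PORTRAIT_DATABASE[name][0]))
    return PORTRAIT_DATABASE[best_name][1]


if __name__ == "__main__":
    print(start_up({"fitness": 0.8, "music_interest": 0.4}))
    # -> {'watch_face': 'sport', 'first_app': 'workout'}
```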
7. A wearable device, comprising:
an operation receiving module, configured to receive a user portrait generation operation;
a data acquisition module, configured to, in response to the user portrait generation operation, acquire physiological parameters of a user and acquire user behavior data of the user at a mobile terminal, wherein the wearable device is communicatively connected to the mobile terminal; and
a portrait generation module, configured to generate a user portrait of the user according to the physiological parameters and the user behavior data.
8. The wearable device according to claim 7, wherein the user portrait comprises at least two dimensions and a weight coefficient for each of the at least two dimensions;
the portrait generation module comprises:
a first generation submodule, configured to determine a first weight coefficient corresponding to each physiological parameter according to a first correspondence between physiological parameters and weight coefficients;
a second generation submodule, configured to perform data processing on the user behavior data according to a preset data processing rule to obtain a second weight coefficient corresponding to each preset label, wherein the preset label comprises a target behavior in the user behavior data; and
a vector generation submodule, configured to determine the dimension corresponding to each first weight coefficient and each second weight coefficient according to a second correspondence between weight coefficients and dimensions, so as to obtain the user portrait of the user.
9. A wearable device, comprising a processor, a memory, and a computer program stored on the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of the user portrait generation method according to any one of claims 1 to 6.
10. A computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the steps of the user portrait generation method according to any one of claims 1 to 6.
CN201910920337.7A, filed 2019-09-26 (priority date 2019-09-26): User portrait generation method and wearable device. Status: Pending. Published as CN110765170A (en).

Priority Applications (1)

- CN201910920337.7A (CN110765170A), priority date 2019-09-26, filing date 2019-09-26: User portrait generation method and wearable device

Publications (1)

- CN110765170A, published 2020-02-07

Family ID: 69330598

Family Applications (1)

- CN201910920337.7A (CN110765170A): User portrait generation method and wearable device, status: Pending

Country Status (1)

- CN: CN110765170A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111582933A (en) * 2020-05-07 2020-08-25 北京点众科技股份有限公司 Method, terminal and storage medium for perfecting user portrait based on purchasing electronic book
CN111768828A (en) * 2020-09-03 2020-10-13 成都索贝数码科技股份有限公司 Patient sign portrait construction system and method based on data inside and outside hospital
WO2022111071A1 (en) * 2020-11-25 2022-06-02 Oppo广东移动通信有限公司 User profile generation method, apparatus, server, and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104933049A (en) * 2014-03-17 2015-09-23 华为技术有限公司 Method and system for generating digital human
CN107341679A (en) * 2016-04-29 2017-11-10 腾讯科技(深圳)有限公司 Obtain the method and device of user's portrait
WO2018000210A1 (en) * 2016-06-28 2018-01-04 深圳狗尾草智能科技有限公司 User portrait-based skill package recommendation device and method
CN108520058A (en) * 2018-03-30 2018-09-11 维沃移动通信有限公司 A kind of Business Information recommends method and mobile terminal

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
WD01: Invention patent application deemed withdrawn after publication

Application publication date: 20200207