CN105808774A - Information providing method and device - Google Patents
- Publication number
- CN105808774A (application CN201610184647.3A)
- Authority
- CN
- China
- Prior art keywords
- dressing
- user
- information
- clothing
- parameters
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06F16/9537—Spatial or temporal dependent retrieval, e.g. spatiotemporal queries
- G06F16/9535—Search customisation based on user profiles and personalisation
- G06Q50/10—Services
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; connectivity analysis, e.g. of connected components
- G06V10/56—Extraction of image or video features relating to colour
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
- G06V40/161—Human faces: detection; localisation; normalisation
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Databases & Information Systems (AREA)
- Multimedia (AREA)
- Business, Economics & Management (AREA)
- Health & Medical Sciences (AREA)
- Tourism & Hospitality (AREA)
- General Engineering & Computer Science (AREA)
- Data Mining & Analysis (AREA)
- General Health & Medical Sciences (AREA)
- Human Computer Interaction (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Economics (AREA)
- Human Resources & Organizations (AREA)
- Marketing (AREA)
- Primary Health Care (AREA)
- Strategic Management (AREA)
- General Business, Economics & Management (AREA)
- Management, Administration, Business Operations System, And Electronic Commerce (AREA)
Abstract
The invention discloses an information providing method and device, belonging to the field of smart home. The method includes: acquiring a user image; determining dressing parameters of the user according to the user image, the dressing parameters including at least one of clothing color, clothing style, clothing pattern, clothing material, clothing thickness, hair style, skin color and accessories; generating evaluation information on the user's dressing according to the dressing parameters and a preset dressing matching model; and providing the evaluation information. This solves the problem that a user cannot determine the real matching effect after matching clothes according to a learned matching style, and makes it possible to provide the user with realistic and accurate evaluation information based on the user's actual dressing matching.
Description
Technical Field
The disclosure relates to the field of smart home, and in particular, to an information providing method and device.
Background
To dress fashionably, people usually learn matching styles that follow fashion trends by browsing fashion magazines or clothing matching pictures on the Internet, and then choose their clothes according to the learned matching styles.
However, after users match their own clothes according to a learned matching style, they cannot determine the real matching effect.
Disclosure of Invention
In order to solve the problem that a user cannot determine the real matching effect after matching clothes, the present disclosure provides an information providing method and device. The technical solution is as follows:
according to a first aspect of embodiments of the present disclosure, there is provided an information providing method, including:
acquiring a user image;
determining the dressing parameters of the user according to the user image, wherein the dressing parameters comprise at least one of clothing color, clothing style, clothing pattern, clothing material, clothing thickness, hair style, skin color and accessories;
generating evaluation information on the dressing of the user according to the dressing parameters and a preset dressing matching model;
providing the evaluation information.
Optionally, generating evaluation information of the clothing of the user according to the clothing parameters and the preset clothing matching model, including:
acquiring position information of a user, and acquiring weather information of the position according to the position information;
and generating evaluation information according to the dressing parameters, the weather information and the preset dressing matching model, wherein the evaluation information comprises first evaluation sub-information of dressing thickness of the user and second evaluation sub-information of dressing matching of the user.
Optionally, before generating evaluation information on the clothing of the user according to the clothing parameters and the preset clothing matching model, the method further includes:
acquiring a dressing matching sample;
determining dressing parameters in the dressing matching sample;
and training according to the determined dressing parameters to obtain a preset dressing matching model.
Optionally, determining the dressing parameters of the user according to the user image includes:
identifying a human body region in the user image, the human body region including at least one of a head region, an upper body region, and a lower body region;
and determining the dressing parameters according to the content in the identified human body region.
Optionally, the method is used in an intelligent camera with a voice output function, and providing the evaluation information includes: playing the evaluation information;
or, the method is used in a server, and providing the evaluation information includes: sending the evaluation information to a preset terminal, wherein the preset terminal is used for displaying and/or playing the evaluation information.
According to a second aspect of the embodiments of the present disclosure, there is provided an information providing apparatus including:
a first acquisition module configured to acquire a user image;
the first determining module is configured to determine the dressing parameters of the user according to the user image, wherein the dressing parameters comprise at least one of clothing color, clothing style, clothing pattern, clothing material, clothing thickness, hair style, skin color and accessories;
the generation module is configured to generate evaluation information on the dressing of the user according to the dressing parameters and the preset dressing matching model;
a providing module configured to provide the evaluation information.
Optionally, the generating module includes:
the acquisition unit is configured to acquire position information of a user and acquire weather information of the position according to the position information;
the generating unit is configured to generate evaluation information according to the dressing parameters, the weather information and the preset dressing matching model, wherein the evaluation information comprises first evaluation sub-information of dressing thickness of the user and second evaluation sub-information of dressing matching of the user.
Optionally, the apparatus further comprises:
a second obtaining module configured to obtain a dressing match sample;
a second determination module configured to determine the dressing parameters in the dressing matching sample;
and the third determining module is configured to train according to the determined dressing parameters to obtain a preset dressing matching model.
Optionally, the first determining module includes:
a recognition unit configured to recognize a human body region in the user image, the human body region including at least one of a head region, an upper body region, and a lower body region;
and the determining unit is configured to determine the dressing parameters according to the content in the identified human body region.
Optionally, the apparatus is included in an intelligent camera, and the providing module is configured to play the evaluation information;
or, the device is included in a server, and the providing module is configured to send the evaluation information to a preset terminal, and the preset terminal is used for displaying and/or playing the evaluation information.
According to a third aspect of the embodiments of the present disclosure, there is provided an information providing apparatus including:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to:
acquiring a user image;
determining the dressing parameters of the user according to the user image, wherein the dressing parameters comprise at least one of clothing color, clothing style, clothing pattern, clothing material, clothing thickness, hair style, skin color and accessories;
generating evaluation information on the dressing of the user according to the dressing parameters and a preset dressing matching model;
providing the evaluation information.
The technical solutions provided by the embodiments of the present disclosure may have the following beneficial effects:
after the user image is recognized and the dressing parameters are determined, the matching degrees between the user's dressing parameters are calculated according to a preset dressing matching model, evaluation information on the user's dressing is generated according to the matching degrees, and the evaluation information is provided; this solves the problem that a user cannot determine the real matching effect after matching clothes according to a learned matching style, and achieves the effect of providing the user with realistic and accurate evaluation information based on the user's actual clothing matching.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
FIG. 1 is a schematic diagram of an implementation environment involved in an information providing method, according to an exemplary embodiment;
FIG. 2 is a flow chart illustrating an information providing method according to an example embodiment;
FIG. 3A is a flow chart illustrating a method of providing information according to another exemplary embodiment;
FIG. 3B is a flow chart illustrating a method of providing information according to another exemplary embodiment;
FIG. 3C is a flow chart illustrating a method of providing information according to another exemplary embodiment;
FIG. 4 is a flow chart illustrating an information providing method according to another exemplary embodiment;
FIG. 5 is a block diagram illustrating an information providing apparatus according to an example embodiment;
FIG. 6 is a block diagram illustrating an information providing apparatus according to another exemplary embodiment;
FIG. 7 is a block diagram illustrating an information providing apparatus according to another exemplary embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
Fig. 1 is a schematic diagram of the implementation environment involved in an information providing method according to an exemplary embodiment. The implementation environment may include a smart camera 120 and a server 140.
The smart camera 120 has an image capture function for capturing images of a user; optionally, the smart camera 120 also has a voice output function and a positioning function.
The smart camera 120 may be a smart camera device equipped with a voice output module and a GPS (Global Positioning System) module, or the camera of a smart terminal with a camera function, such as a mobile phone, a tablet computer or a portable computer.
The smart camera 120 may be connected to the server 140 through a wireless network such as Wi-Fi (Wireless Fidelity) or Bluetooth, or through a wired network.
The server 140 may be a server, a server cluster composed of several servers, or a cloud computing service center.
It should be noted that the implementation environment provided by the embodiments of the present disclosure may include a plurality of smart cameras 120 connected to the server 140; this is not limited in the embodiments of the present disclosure.
Fig. 2 is a flowchart illustrating an information providing method according to an exemplary embodiment, where the information providing method may be used in the smart camera 120 shown in fig. 1 or in the server 140 shown in fig. 1, and this embodiment is illustrated by applying the information providing method to the server 140 shown in fig. 1. The method may comprise the steps of:
in step 201, a user image is acquired.
In step 202, the dressing parameters of the user are determined according to the user image, the dressing parameters including at least one of clothing color, clothing style, clothing pattern, clothing material, clothing thickness, hair style, skin color and accessories.
In step 203, evaluation information of the user's clothing is generated according to the clothing parameters and the preset clothing matching model.
In step 204, evaluation information is provided.
To sum up, in the information providing method provided by the embodiments of the present disclosure, a user image is acquired after the user has matched his or her clothes; the image reflects dressing parameters such as the clothing color, clothing style, clothing pattern, clothing material, clothing thickness, hair style, skin color and accessories worn by the user. After the image is recognized and the dressing parameters are determined, the matching degrees between the user's dressing parameters are calculated according to a preset dressing matching model, evaluation information on the user's dressing is generated according to the matching degrees, and the evaluation information is provided. This solves the problem that a user cannot determine the real matching effect after matching clothes according to a learned matching style, and achieves the effect of providing the user with realistic and accurate evaluation information based on the user's actual clothing matching.
Fig. 3A is a flowchart illustrating an information providing method according to another exemplary embodiment, where the information providing method may be used in the smart camera 120 shown in fig. 1 or in the server 140 shown in fig. 1, and this embodiment is illustrated in the case where the information providing method is applied to the server 140 shown in fig. 1. The method may comprise the steps of:
in step 301, a user image is acquired.
The intelligent camera can capture user images and send them to the server; correspondingly, the server receives the user images sent by the intelligent camera. A user image contains at least one person.
In one possible example, the intelligent camera is placed on a dressing mirror; when using the mirror, the user can trigger the intelligent camera to capture a user image and send it to the server, for example by operating a switch on the camera.
In step 302, a human body region in the user image is identified, the human body region including at least one of a head region, an upper body region, and a lower body region.
In one possible implementation, the step of the server identifying the human body regions in the user image may include: the server identifies the head region in the user image through face recognition and, after the position and size of the head region are determined, identifies the upper body region and the lower body region of the foreground person through an iterative GraphCut algorithm, based on a human body shape template and color models of the foreground and background.
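The patent does not prescribe a concrete implementation of this step. Purely as a rough illustration, the following Python sketch (assuming OpenCV is available) seeds a face detector and then segments the foreground person with OpenCV's iterative GrabCut; the rectangle derivation and the region proportions are invented for illustration and are not taken from the patent.

```python
import cv2
import numpy as np

def detect_body_regions(image):
    """Sketch of step 302: face detection, then GrabCut segmentation."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]  # position and size of the head region

    # Seed an iterative graph cut with a rectangle derived from the head
    # position to separate the foreground person from the background.
    left = max(x - w, 0)
    width = min(3 * w, image.shape[1] - left)
    height = min(8 * h, image.shape[0] - y)
    mask = np.zeros(image.shape[:2], np.uint8)
    bgd = np.zeros((1, 65), np.float64)
    fgd = np.zeros((1, 65), np.float64)
    cv2.grabCut(image, mask, (left, y, width, height), bgd, fgd, 5,
                cv2.GC_INIT_WITH_RECT)

    # Split the person box into head / upper body / lower body by simple
    # vertical proportions (an illustrative assumption).
    return {"head": (x, y, w, h),
            "upper_body": (left, y + h, width, min(3 * h, height - h)),
            "lower_body": (left, y + 4 * h, width, max(height - 4 * h, 0))}
```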
It should be noted that the identified human body regions may be the head, upper body and lower body regions of this embodiment, or other types of regions such as upper limb, lower limb and upper torso regions; this embodiment does not limit the criteria for identifying human body regions and merely takes the head, upper body and lower body regions as an example.
In step 303, a dressing parameter is determined according to the content in the identified human body region, wherein the dressing parameter includes at least one of a garment color, a garment style, a garment pattern, a garment material, a garment thickness, a hair style, a skin color, and an accessory.
Optionally, the dressing parameters include the content identified from at least one human body region.
For example, the dressing parameters may include, without limitation, the hairstyle identified in the head region; the color, style and pattern of the jacket identified in the upper body region; and the color, material and thickness of the trousers identified in the lower body region.
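As one hedged illustration of this step, a single parameter, the dominant garment color of a region, could be estimated by clustering the region's pixels; the sketch below assumes OpenCV and the region-tuple format of the previous sketch, and stands in for whatever recognition procedure is actually used.

```python
import cv2
import numpy as np

def dominant_color(image, region, k=3):
    """Estimate the garment color of a body region via k-means clustering."""
    x, y, w, h = region
    pixels = image[y:y + h, x:x + w].reshape(-1, 3).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
    _, labels, centers = cv2.kmeans(pixels, k, None, criteria, 3,
                                    cv2.KMEANS_RANDOM_CENTERS)
    # Treat the centre of the most populated cluster as the garment color.
    counts = np.bincount(labels.flatten())
    return centers[counts.argmax()].astype(int)  # a BGR triple
```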
In step 304, evaluation information of the clothing of the user is generated according to the clothing parameters and the preset clothing matching model.
The preset dressing matching model includes at least one of: matching degrees between at least two of garment color, garment style, garment pattern, garment material, garment thickness, hairstyle, skin color and accessories; and matching degrees between different garment colors, different garment styles, different garment patterns or different garment materials.
Optionally, the matching degree is expressed as a percentage from 0% to 100%, where 0% represents the lowest matching degree and 100% the highest; the higher the matching degree, the better the collocation. This embodiment does not limit the representation of the matching degree.
For example, the preset dressing matching model may include matching degrees between garment styles and accessories, such as a 10% matching degree between short sleeves and a scarf and a 70% matching degree between an overskirt and sunglasses. As another example, it may include matching degrees between different colors, such as 5% between red and green, 70% between red and white, and 60% between beige and black.
Optionally, this step may include: the server calculates the matching degrees between different items in the dressing parameters according to the preset dressing matching model, and generates the evaluation information according to the matching degrees. In this case, the higher the matching degrees, the more positive the generated evaluation information.
For example, if the dressing parameters indicate a red shirt matched with a green skirt, and the preset dressing matching model gives a 0% matching degree between red and green and an 80% matching degree between a shirt and a skirt, evaluation information with a matching score of 6 may be generated.
The evaluation information generated by the server may be an overall evaluation of the user's clothing collocation, or a suggestion on the dressing parameters whose calculated matching degree is low. Taking evaluation information in the form of a matching suggestion as an example, when the calculated color matching degree between a green jacket and red trousers is low, the suggestion may be: "The color matching is not good; the green jacket could be tried with white trousers."
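The matching model can be pictured as a pairwise matching-degree table. The following sketch mirrors the numeric examples above; the table contents and the averaging rule used to turn pairwise degrees into a score are illustrative assumptions, since the patent does not fix an aggregation formula.

```python
# Pairwise matching degrees, keyed by unordered item pairs.
MATCH_MODEL = {
    frozenset(["red", "green"]): 0.05,
    frozenset(["red", "white"]): 0.70,
    frozenset(["beige", "black"]): 0.60,
    frozenset(["shirt", "skirt"]): 0.80,
    frozenset(["short sleeves", "scarf"]): 0.10,
}

def evaluate(dressing_params, model=MATCH_MODEL):
    """Average the matching degrees of every known pair of parameters."""
    items = list(dressing_params.values())
    degrees = [model[frozenset((a, b))]
               for i, a in enumerate(items) for b in items[i + 1:]
               if frozenset((a, b)) in model]
    if not degrees:
        return "No matching rules apply."
    return f"Match score: {round(10 * sum(degrees) / len(degrees))}/10"

# evaluate({"top_color": "red", "bottom_color": "green",
#           "top_style": "shirt", "bottom_style": "skirt"})
# averages 0.05 and 0.8 and returns "Match score: 4/10".
```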
In step 305, the evaluation information is sent to a preset terminal, and the preset terminal is used for displaying and/or playing the evaluation information.
In the implementation environment shown in fig. 1, the smart camera 120 may further establish a binding relationship with at least one preset terminal (not shown in fig. 1), which may be an electronic device such as a mobile phone, a tablet computer or a portable computer. In this case, the server stores the correspondence between the smart camera and the terminal identifier of the preset terminal; after receiving the user image sent by the smart camera and generating the evaluation information, the server obtains the terminal identifier of the preset terminal bound to the smart camera and sends the evaluation information to the preset terminal according to the terminal identifier.
The preset terminal is installed with an application program corresponding to the smart camera, and displays and/or plays the received evaluation information in the application program.
Optionally, the terminal identifier of the preset terminal may be the IP (Internet Protocol) address or the MAC (Media Access Control) address of the preset terminal.
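A toy sketch of this binding relationship follows: the server keeps a mapping from each camera to the terminal identifiers bound to it and forwards evaluation text accordingly. The identifiers and push_to_terminal are placeholders; the patent does not specify the delivery channel.

```python
# camera identifier -> bound terminal identifiers (IP or MAC addresses)
camera_bindings = {"camera-001": ["10.0.0.12", "a4:5e:60:c1:22:0f"]}

def deliver_evaluation(camera_id, evaluation, push_to_terminal):
    """Forward the evaluation to every terminal bound to the camera."""
    for terminal_id in camera_bindings.get(camera_id, []):
        push_to_terminal(terminal_id, evaluation)
```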
It should be added that, before step 304, the server may further perform the following steps, as shown in fig. 3B:
in step 306, the position information of the user is obtained, and weather information of the position is obtained according to the position information.
The intelligent camera acquires its own position information, which is also the position information of the user, through GPS, and sends the acquired position information to the server. Correspondingly, the server receives the position information of the user sent by the intelligent camera and acquires the weather information of that position from the network.
Optionally, the location information is longitude and latitude information, and the server determines a corresponding city according to the longitude and latitude information and acquires weather information of the city.
Optionally, the weather information includes at least one of temperature, humidity, wind power level, weather condition, air pressure, and ultraviolet index, where the weather condition may be sunny day, rainy day, cloudy day, or snowy day, and the content of the weather information is not limited in this embodiment.
For example, if the position information acquired through the intelligent camera is 116 degrees east longitude and 40 degrees north latitude, the server determines from the position information that the position is Beijing and acquires the weather information of Beijing.
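A hedged sketch of step 306: round the coordinates to look up a city, then ask a weather source for that city. Both the coordinate table and fetch_weather are illustrative stand-ins; the patent names neither a geocoding method nor a weather API.

```python
# (latitude, longitude) rounded to whole degrees -> city (assumed table)
CITY_TABLE = {(40, 116): "Beijing"}

def weather_for_position(lat, lon, fetch_weather):
    """Resolve coordinates to a city and fetch its weather."""
    city = CITY_TABLE.get((round(lat), round(lon)))
    if city is None:
        return None
    # fetch_weather(city) is assumed to return something like
    # {"temperature_c": 5, "wind_level": 4, "condition": "sunny"}.
    return fetch_weather(city)
```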
It should be noted that there is no fixed execution order between step 306 and step 301.
Accordingly, step 304 may be replaced with step 307: generating the evaluation information according to the dressing parameters, the weather information and the preset dressing matching model, wherein the evaluation information includes first evaluation sub-information on the dressing thickness of the user and second evaluation sub-information on the dressing matching of the user.
Optionally, step 307 may include:
(1) generating the first evaluation sub-information on the dressing thickness of the user according to the dressing parameters, the weather information and the dressing matching model.
The server calculates the matching degree between the dressing thickness in the dressing parameters and the acquired weather conditions according to the preset dressing matching model, and generates the evaluation information according to the matching degree. In this case, the higher the matching degree, the more positive the generated evaluation information.
Optionally, the first evaluation sub-information may be an evaluation of the dressing thickness. For example, if the weather conditions acquired by the server are an air temperature of 5 degrees Celsius and wind level 4, and the determined dressing parameters are a sweater and jeans, the server may generate first evaluation sub-information such as "You are wearing a little too little today."
(2) generating the second evaluation sub-information on the dressing matching of the user according to the dressing parameters and the dressing matching model.
This sub-step is similar to step 304 and is not described here again.
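As a minimal sketch of sub-step (1), the thickness implied by the recognized garments can be compared against a temperature band; the garment-to-thickness table and the bands are assumptions, chosen so that the 5-degree sweater-and-jeans example above yields a "too little" verdict.

```python
# Assumed garment thickness levels (1 = light, 3 = heavy).
THICKNESS = {"t-shirt": 1, "shirt": 1, "sweater": 2, "jeans": 2,
             "down jacket": 3}

def expected_thickness(temp_c):
    """Assumed temperature bands: heavy below 10 C, medium below 20 C."""
    return 3 if temp_c < 10 else 2 if temp_c < 20 else 1

def thickness_evaluation(garments, temp_c):
    worn = max(THICKNESS.get(g, 1) for g in garments)
    need = expected_thickness(temp_c)
    if worn < need:
        return "You may be wearing a little too little today."
    if worn > need:
        return "You may be wearing a little too much today."
    return "Your outfit suits the weather."

# thickness_evaluation(["sweater", "jeans"], 5) -> "...too little today."
```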
In an illustrative example, the server determines, from the received user image, that the dressing parameters of the user are a green short-sleeved shirt, red trousers and black shoes, and determines, from the position information acquired by the intelligent camera with the positioning function, that the weather information includes an air temperature of 5 degrees Celsius. According to the dressing parameters, the weather information and the preset dressing matching model, the server generates first evaluation sub-information such as "It is cold today; you may be wearing a little too little" and second evaluation sub-information such as "The color matching is not good; the green jacket could be tried with white trousers," and sends both to the bound terminal, where they are displayed.
It should be noted that the information providing method described above may also be used in the smart camera 120 with a voice output function, and when the information providing method is used in the smart camera 120, the step 305 described above may be alternatively implemented as the following step 308, as shown in fig. 3C:
in step 308, the rating information is played.
In this embodiment, the intelligent camera plays the evaluation information through its voice output module.
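As a minimal sketch of step 308, assuming the pyttsx3 text-to-speech library stands in for the camera's voice output module (the patent does not name a TTS engine):

```python
import pyttsx3

def play_evaluation(text):
    """Speak the evaluation text through the local audio output."""
    engine = pyttsx3.init()
    engine.say(text)      # queue the evaluation text
    engine.runAndWait()   # block until playback finishes
```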
To sum up, in the information providing method provided by the embodiments of the present disclosure, a user image is acquired after the user has matched his or her clothes; the image reflects dressing parameters such as the clothing color, clothing style, clothing pattern, clothing material, clothing thickness, hair style, skin color and accessories worn by the user. After the image is recognized and the dressing parameters are determined, the matching degrees between the user's dressing parameters are calculated according to a preset dressing matching model, evaluation information on the user's dressing is generated according to the matching degrees, and the evaluation information is provided. This solves the problem that a user cannot determine the real matching effect after matching clothes according to a learned matching style, and achieves the effect of providing the user with realistic and accurate evaluation information based on the user's actual clothing matching.
This embodiment determines the evaluation information according to the weather information, the dressing parameters and the preset dressing matching model, achieving the additional effect of evaluating the dressing thickness of the user.
In another optional embodiment based on the above embodiment, the following steps precede step 304, as shown in fig. 4:
in step 401, a dress collocation sample is obtained.
Optionally, the server obtains dressing collocation samples from pictures and/or videos. A sample may be a celebrity street snap, a clothing collocation picture in a fashion magazine, a collocation appearing in a film or television production, and so on; this embodiment does not limit the source of the samples.
In step 402, the dressing parameters in the dressing-match sample are determined.
The server determines the dressing parameters of the dressing collocation sample by identifying the human body regions in the sample and recognizing the content in those regions, in a manner similar to steps 302 and 303, which is not repeated in this embodiment.
In step 403, a preset dressing matching model is obtained by training according to the determined dressing parameters.
Optionally, the server selects a preset mathematical model, inputs the determined dressing parameters into the preset mathematical model for iteration, and thereby determines the final preset dressing matching model; this embodiment does not limit the choice of the preset mathematical model.
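The patent leaves the mathematical model open. Purely as one illustration of step 403, each pairwise matching degree could be estimated from how often two items co-occur across the collocation samples (a Jaccard-style frequency model, assumed here rather than taken from the patent):

```python
from collections import Counter
from itertools import combinations

def train_match_model(samples):
    """samples: list of dressing-parameter lists, one list per sample."""
    pair_counts, item_counts = Counter(), Counter()
    for params in samples:
        items = set(params)
        item_counts.update(items)
        pair_counts.update(frozenset(p) for p in combinations(items, 2))
    # Matching degree of (a, b): co-occurrences relative to how often
    # either item appears at all (Jaccard index over samples).
    return {pair: n / (item_counts[min(pair)] + item_counts[max(pair)] - n)
            for pair, n in pair_counts.items()}

# train_match_model([["shirt", "skirt"], ["shirt", "jeans"]])
# gives the pairs (shirt, skirt) and (shirt, jeans) a degree of 0.5 each.
```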
The following are embodiments of the disclosed apparatus that may be used to perform embodiments of the disclosed methods. For details not disclosed in the embodiments of the apparatus of the present disclosure, refer to the embodiments of the method of the present disclosure.
Fig. 5 is a block diagram of an information providing apparatus according to an exemplary embodiment. As shown in fig. 5, the apparatus is applied to the implementation environment shown in fig. 1 and may be implemented as all or part of the above smart camera or server to provide the information providing method. The apparatus includes, but is not limited to:
a first acquisition module 510 configured to acquire a user image.
A first determining module 520 configured to determine the user's dressing parameters according to the user image, the dressing parameters including at least one of a garment color, a garment style, a garment pattern, a garment material, a garment thickness, a hair style, a skin color, and accessories.
A generating module 530 configured to generate evaluation information for the clothing of the user according to the clothing parameters and the preset clothing matching model.
A providing module 540 configured to provide the evaluation information.
To sum up, the information providing apparatus provided by the embodiments of the present disclosure acquires a user image taken after the user has matched his or her clothes; the image reflects dressing parameters such as the clothing color, clothing style, clothing pattern, clothing material, clothing thickness, hair style, skin color and accessories worn by the user. The apparatus recognizes the image to determine the dressing parameters, calculates the matching degrees between the user's dressing parameters according to a preset dressing matching model, generates evaluation information on the user's dressing according to the matching degrees, and provides the evaluation information. This solves the problem that a user cannot determine the real matching effect after matching clothes according to a learned matching style, and achieves the effect of providing the user with realistic and accurate evaluation information based on the user's actual clothing matching.
Fig. 6 is a block diagram of an information providing apparatus according to another exemplary embodiment. As shown in fig. 6, the apparatus is applied to the implementation environment shown in fig. 1 and may be implemented as all or part of the above smart camera or server to provide the information providing method. The apparatus includes, but is not limited to:
a first acquisition module 610 configured to acquire a user image.
A first determining module 620 configured to determine the user's dressing parameters according to the user image, the dressing parameters including at least one of a garment color, a garment style, a garment pattern, a garment material, a garment thickness, a hair style, a skin color, and accessories.
The first determination module 620 includes:
a recognition unit 621 configured to recognize a human body region in the user image, the human body region including at least one of a head region, an upper body region, and a lower body region;
a determining unit 622 configured to determine the dressing parameters according to the content in the identified human body region.
A second obtaining module 630 configured to obtain a dressing-match sample;
a second determination module 640 configured to determine the dressing parameters in the dressing-match sample;
and a third determining module 650 configured to train a preset dressing matching model according to the determined dressing parameters.
The generating module 660 is configured to generate evaluation information of the clothing of the user according to the clothing parameters and the preset clothing matching model.
The generating module 660 includes:
an obtaining unit 661 configured to obtain location information of a user, and obtain weather information of a location according to the location information;
the generating unit 662 is configured to generate evaluation information according to the dressing parameters, the weather information and the preset dressing matching model, wherein the evaluation information comprises first evaluation sub-information of dressing thickness of the user and second evaluation sub-information of dressing matching of the user.
A providing module 670 configured to provide the evaluation information.
Optionally, when the information providing apparatus is included in the smart camera, the providing module 670 is configured to play the evaluation information;
or, when the information providing apparatus is included in a server, the providing module 670 is configured to send the evaluation information to a preset terminal, wherein the preset terminal is used to display and/or play the evaluation information.
To sum up, the information providing apparatus provided by the embodiments of the present disclosure acquires a user image taken after the user has matched his or her clothes; the image reflects dressing parameters such as the clothing color, clothing style, clothing pattern, clothing material, clothing thickness, hair style, skin color and accessories worn by the user. The apparatus recognizes the image to determine the dressing parameters, calculates the matching degrees between the user's dressing parameters according to a preset dressing matching model, generates evaluation information on the user's dressing according to the matching degrees, and provides the evaluation information. This solves the problem that a user cannot determine the real matching effect after matching clothes according to a learned matching style, and achieves the effect of providing the user with realistic and accurate evaluation information based on the user's actual clothing matching.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
An exemplary embodiment of the present disclosure provides an information providing apparatus capable of implementing an information providing method provided by the present disclosure, the apparatus including: a processor, a memory for storing processor-executable instructions;
wherein the processor is configured to:
acquiring a user image;
determining the dressing parameters of the user according to the user image, wherein the dressing parameters comprise at least one of clothing color, clothing style, clothing pattern, clothing material, clothing thickness, hair style, skin color and accessories;
generating evaluation information of the dresses of the user according to the dresses parameters and a preset dresses matching model;
providing the evaluation information.
Fig. 7 is a block diagram illustrating an information providing apparatus according to another exemplary embodiment. For example, the apparatus 700 may be provided as a network side device. Referring to fig. 7, the apparatus 700 includes a processing component 702 that further includes one or more processors and memory resources, represented by memory 704, for storing instructions, such as applications, that are executable by the processing component 702. The application programs stored in memory 704 may include one or more modules that each correspond to a set of instructions. Further, the processing component 702 is configured to execute instructions to perform the above-described information providing method.
The apparatus 700 may also include a power component 706 configured to perform power management of the apparatus 700, a wired or wireless network interface 708 configured to connect the apparatus 700 to a network, and an input/output (I/O) interface 710. The apparatus 700 may operate based on an operating system stored in the memory 704, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.
Claims (11)
1. An information providing method, characterized in that the method comprises:
acquiring a user image;
determining the dressing parameters of the user according to the user image, wherein the dressing parameters comprise at least one of clothing color, clothing style, clothing pattern, clothing material, clothing thickness, hair style, skin color and accessories;
generating evaluation information of the dress of the user according to the dress parameters and a preset dress matching model;
and providing the evaluation information.
2. The method of claim 1, wherein generating evaluation information for the user's clothing based on the clothing parameters and a preset clothing matching model comprises:
acquiring the position information of the user, and acquiring the weather information of the position according to the position information;
and generating the evaluation information according to the dressing parameters, the weather information and the preset dressing matching model, wherein the evaluation information comprises first evaluation sub-information of dressing thickness of the user and second evaluation sub-information of dressing matching of the user.
3. The method of claim 1, wherein before generating evaluation information for the user's clothing based on the clothing parameters and a preset clothing matching model, the method further comprises:
acquiring a dressing matching sample;
determining dressing parameters in the dressing collocation sample;
and training according to the determined dressing parameters to obtain the preset dressing matching model.
4. The method of any of claims 1 to 3, wherein said determining a user's dressing parameters from said user image comprises:
identifying a human body region in the user image, the human body region including at least one of a head region, an upper body region, and a lower body region;
and determining the dressing parameters according to the content in the identified human body region.
5. The method according to any one of claims 1 to 3,
the method is used in an intelligent camera with a voice output function, and the providing the evaluation information comprises the following steps: playing the evaluation information;
or,
the method is used in a server, and the providing the evaluation information comprises the following steps: and sending the evaluation information to a preset terminal, wherein the preset terminal is used for displaying and/or playing the evaluation information.
6. An information providing apparatus, characterized in that the apparatus comprises:
a first acquisition module configured to acquire a user image;
a first determining module configured to determine a dressing parameter of a user according to the user image, wherein the dressing parameter comprises at least one of a garment color, a garment style, a garment pattern, a garment material, a garment thickness, a hair style, a skin color and an accessory;
the generating module is configured to generate evaluation information of the dress of the user according to the dress parameters and a preset dress matching model;
a providing module configured to provide the evaluation information.
7. The apparatus of claim 6, wherein the generating module comprises:
the acquisition unit is configured to acquire the position information of the user and acquire the weather information of the position according to the position information;
the generating unit is configured to generate the evaluation information according to the dressing parameters, the weather information and the preset dressing matching model, wherein the evaluation information comprises first evaluation sub-information of dressing thickness of the user and second evaluation sub-information of dressing matching of the user.
8. The apparatus of claim 6, further comprising:
a second obtaining module configured to obtain a dressing match sample;
a second determination module configured to determine the dressing parameters in the dressing collocation sample;
and the third determining module is configured to train according to the determined dressing parameters to obtain the preset dressing matching model.
9. The apparatus of any of claims 6 to 8, wherein the first determining module comprises:
a recognition unit configured to recognize a human body region in the user image, the human body region including at least one of a head region, an upper body region, and a lower body region;
a determination unit configured to determine the dressing parameter according to the identified content in the human body region.
10. The apparatus according to any one of claims 6 to 8,
the device is contained in an intelligent camera, and the providing module is configured to play the evaluation information;
or,
the device is contained in a server, and the providing module is configured to send the evaluation information to a preset terminal, wherein the preset terminal is used for displaying and/or playing the evaluation information.
11. An information providing apparatus, characterized in that the apparatus comprises:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to:
acquiring a user image;
determining the dressing parameters of the user according to the user image, wherein the dressing parameters comprise at least one of clothing color, clothing style, clothing pattern, clothing material, clothing thickness, hair style, skin color and accessories;
generating evaluation information of the dress of the user according to the dress parameters and a preset dress matching model;
and providing the evaluation information.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610184647.3A | 2016-03-28 | 2016-03-28 | Information providing method and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610184647.3A | 2016-03-28 | 2016-03-28 | Information providing method and device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN105808774A | 2016-07-27 |
Family
ID=56454987
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610184647.3A (published as CN105808774A; pending) | Information providing method and device | 2016-03-28 | 2016-03-28 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN105808774A (en) |
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106446065A (en) * | 2016-09-06 | 2017-02-22 | 珠海市魅族科技有限公司 | Clothes collocation recommendation method and device |
CN106558052A (en) * | 2016-10-10 | 2017-04-05 | 北京光年无限科技有限公司 | A kind of interaction data for intelligent robot processes output intent and robot |
CN106709635A (en) * | 2016-12-05 | 2017-05-24 | 上海斐讯数据通信技术有限公司 | Clothing management and dressing auxiliary method and system |
CN107038622A (en) * | 2017-03-23 | 2017-08-11 | 珠海格力电器股份有限公司 | Intelligent wardrobe control method and system and intelligent wardrobe |
CN107590584A (en) * | 2017-08-14 | 2018-01-16 | 上海爱优威软件开发有限公司 | Dressing collocation reviewing method |
CN108363750A (en) * | 2018-01-29 | 2018-08-03 | 广东欧珀移动通信有限公司 | Clothes recommend method and Related product |
CN109598578A (en) * | 2018-11-09 | 2019-04-09 | 深圳壹账通智能科技有限公司 | The method for pushing and device of business object data, storage medium, computer equipment |
CN109978720A (en) * | 2017-12-28 | 2019-07-05 | 深圳市优必选科技有限公司 | Wearing grading method and device, intelligent equipment and storage medium |
CN111242016A (en) * | 2020-01-10 | 2020-06-05 | 深圳数联天下智能科技有限公司 | Clothes management method, control device, wardrobe and computer-readable storage medium |
CN112115345A (en) * | 2019-06-21 | 2020-12-22 | 青岛海尔洗衣机有限公司 | Travel prompting method and intelligent travel equipment |
CN113204663A (en) * | 2021-04-23 | 2021-08-03 | 广州未来一手网络科技有限公司 | Information processing method and device for clothing matching |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20030076318A1 (en) * | 2001-10-19 | 2003-04-24 | Ar Card | Method of virtual garment fitting, selection, and processing |
CN1949265A (en) * | 2006-11-02 | 2007-04-18 | 中山大学 | Intelligent dressing suggesting system based on digital family |
CN102842102A (en) * | 2012-06-29 | 2012-12-26 | 惠州Tcl移动通信有限公司 | Intelligent auxiliary dressing device and method |
CN104981830A (en) * | 2012-11-12 | 2015-10-14 | 新加坡科技设计大学 | Clothing matching system and method |
CN105096335A (en) * | 2015-09-17 | 2015-11-25 | 无锡天脉聚源传媒科技有限公司 | Evaluation information transmission method and device |
- 2016-03-28: CN application CN201610184647.3A filed; published as CN105808774A (en); status: Pending
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20030076318A1 (en) * | 2001-10-19 | 2003-04-24 | Ar Card | Method of virtual garment fitting, selection, and processing |
CN1949265A (en) * | 2006-11-02 | 2007-04-18 | 中山大学 | Intelligent dressing suggesting system based on digital family |
CN102842102A (en) * | 2012-06-29 | 2012-12-26 | 惠州Tcl移动通信有限公司 | Intelligent auxiliary dressing device and method |
CN104981830A (en) * | 2012-11-12 | 2015-10-14 | 新加坡科技设计大学 | Clothing matching system and method |
CN105096335A (en) * | 2015-09-17 | 2015-11-25 | 无锡天脉聚源传媒科技有限公司 | Evaluation information transmission method and device |
Cited By (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106446065A (en) * | 2016-09-06 | 2017-02-22 | 珠海市魅族科技有限公司 | Clothes collocation recommendation method and device |
CN106558052A (en) * | 2016-10-10 | 2017-04-05 | 北京光年无限科技有限公司 | A kind of interaction data for intelligent robot processes output intent and robot |
CN106709635A (en) * | 2016-12-05 | 2017-05-24 | 上海斐讯数据通信技术有限公司 | Clothing management and dressing auxiliary method and system |
CN107038622A (en) * | 2017-03-23 | 2017-08-11 | 珠海格力电器股份有限公司 | Intelligent wardrobe control method and system and intelligent wardrobe |
CN107590584A (en) * | 2017-08-14 | 2018-01-16 | 上海爱优威软件开发有限公司 | Dressing collocation reviewing method |
CN109978720A (en) * | 2017-12-28 | 2019-07-05 | 深圳市优必选科技有限公司 | Wearing grading method and device, intelligent equipment and storage medium |
CN108363750B (en) * | 2018-01-29 | 2022-01-04 | Oppo广东移动通信有限公司 | Clothing recommendation method and related products |
CN108363750A (en) * | 2018-01-29 | 2018-08-03 | 广东欧珀移动通信有限公司 | Clothes recommend method and Related product |
CN109598578A (en) * | 2018-11-09 | 2019-04-09 | 深圳壹账通智能科技有限公司 | The method for pushing and device of business object data, storage medium, computer equipment |
CN112115345A (en) * | 2019-06-21 | 2020-12-22 | 青岛海尔洗衣机有限公司 | Travel prompting method and intelligent travel equipment |
CN112115345B (en) * | 2019-06-21 | 2024-02-09 | 青岛海尔洗衣机有限公司 | Travel prompt method and intelligent travel equipment |
CN111242016A (en) * | 2020-01-10 | 2020-06-05 | 深圳数联天下智能科技有限公司 | Clothes management method, control device, wardrobe and computer-readable storage medium |
CN113204663A (en) * | 2021-04-23 | 2021-08-03 | 广州未来一手网络科技有限公司 | Information processing method and device for clothing matching |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN105808774A (en) | Information providing method and device | |
US20090251484A1 (en) | Avatar for a portable device | |
CN106156730B (en) | A kind of synthetic method and device of facial image | |
US9986812B2 (en) | Makeup application assistance device, makeup application assistance system, and makeup application assistance method | |
TWI615776B (en) | Method and system for creating virtual message onto a moving object and searching the same | |
CN103778376A (en) | Information processing device and storage medium | |
CN109299658B (en) | Face detection method, face image rendering device and storage medium | |
WO2022116604A1 (en) | Image captured image processing method and electronic device | |
CN109242940B (en) | Method and device for generating three-dimensional dynamic image | |
JP7342366B2 (en) | Avatar generation system, avatar generation method, and program | |
WO2023138345A1 (en) | Virtual image generation method and system | |
CN109949207B (en) | Virtual object synthesis method and device, computer equipment and storage medium | |
KR20120046653A (en) | System and method for recommending hair based on face and style recognition | |
CN107705245A (en) | Image processing method and device | |
CN113298956A (en) | Image processing method, nail beautifying method and device, and terminal equipment | |
CN111429543B (en) | Material generation method and device, electronic equipment and medium | |
CN114723860B (en) | Method, device and equipment for generating virtual image and storage medium | |
CN108010038B (en) | Live-broadcast dress decorating method and device based on self-adaptive threshold segmentation | |
CN107203646A (en) | A kind of intelligent social sharing method and device | |
CN111429210A (en) | Method, device and equipment for recommending clothes | |
US8994834B2 (en) | Capturing photos | |
KR20120076492A (en) | System and method for recommending hair based on face and style recognition | |
KR101738896B1 (en) | Fitting virtual system using pattern copy and method therefor | |
CN112702520A (en) | Object photo-combination method and device, electronic equipment and computer-readable storage medium | |
CN106407421A (en) | A dress-up matching evaluation method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | Application publication date: 20160727 |