CN110363867A - Virtual dressing system, method, device and medium - Google Patents
Virtual dressing system, method, device and medium
- Publication number: CN110363867A
- Application number: CN201910640937.8A
- Authority
- CN
- China
- Prior art keywords
- dress ornament
- dress
- key point
- posture
- human body
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G02B27/017 — Physics; Optics; Optical elements, systems or apparatus; Optical systems or apparatus not provided for by groups G02B1/00 - G02B26/00, G02B30/00; Head-up displays; Head mounted
- G06T17/00 — Physics; Computing; Image data processing or generation, in general; Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T19/006 — Physics; Computing; Image data processing or generation, in general; Manipulating 3D models or images for computer graphics; Mixed reality
- G06V40/10 — Physics; Computing; Image or video recognition or understanding; Recognition of biometric, human-related or animal-related patterns in image or video data; Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G02B2027/0178 — Physics; Optics; Head-up displays; Head mounted; Eyeglass type
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Graphics (AREA)
- Software Systems (AREA)
- Geometry (AREA)
- Computer Hardware Design (AREA)
- General Engineering & Computer Science (AREA)
- Optics & Photonics (AREA)
- Human Computer Interaction (AREA)
- Multimedia (AREA)
- Processing Or Creating Images (AREA)
Abstract
The present invention relates to a virtual dressing system, method, device and medium. The system includes glasses comprising a memory and a processor and is configured to: provide a garment selection interface and receive the garment selected by the user; acquire visual data that includes the body image reflected by a mirror; obtain the current human pose from the visual data; obtain the garment model of the selected garment together with the matching information between the garment key points of the selected garment and the human-body key points; combine the current human pose with the garment model and the matching information to obtain a garment model of the selected garment matched to the current human pose; render that matched garment model to obtain a garment rendering result; and display the garment rendering result. With the virtual dressing system, method, device and medium of the invention, the user only needs to wear augmented-reality glasses and use any ordinary mirror nearby to enjoy a virtual dressing experience; the solution is flexible and convenient, and the user experience is good.
Description
Technical field
The present invention relates to the field of augmented reality, and more particularly to a virtual dressing system, method, device and medium.
Background art
Virtual dressing (also called virtual try-on) means that, after a user selects one or more items or categories of fashion garments, a two-dimensional or three-dimensional image of those garments is superimposed on the user, so that the user can see what he or she would look like wearing them.

Several methods for virtual dressing already exist. Some of them use augmented reality and motion tracking to generate a video in which the user appears to be wearing the virtual garment. This approach requires a specially designed "virtual mirror" that includes a screen, a camera for capturing the user's pose, and a processing unit for rendering the virtual garment onto the user and displaying the result on the screen. From the user's point of view such a system is inflexible and the experience is poor: if the system is installed at home, it cannot be used anywhere else.
Summary of the invention
The purpose of the present invention is to provide a new virtual dressing system, method, device and medium with which the user only needs to wear augmented-reality glasses and use any ordinary mirror nearby to obtain a virtual dressing experience that is flexible, convenient and pleasant.
The purpose of the present invention is achieved by the following technical solution. The virtual dressing system proposed by the present invention includes: glasses comprising a memory and a processor; a garment selection module for providing a garment selection interface and receiving the garment selected by the user as the selected garment; a visual data acquisition module for acquiring visual data, the visual data including the body image reflected by a mirror; a pose recognition and localization module for obtaining, from the visual data, the current human pose of the body reflected by the mirror, the current human pose including the current pose information of a plurality of human-body key points; a garment data acquisition module for obtaining the garment model of the selected garment, the garment model including a plurality of garment key points, and for obtaining the matching information between the garment key points of the selected garment and the human-body key points; a garment and human-pose fusion module for determining the current pose information of the garment key points of the selected garment from the current pose information of the human-body key points, the garment model of the selected garment and the matching information, so as to obtain a garment model of the selected garment matched to the current human pose; a rendering module for rendering the matched garment model to obtain a garment rendering result; and a garment display module for displaying the garment rendering result by superimposing it on the body image seen by the user, so that the dressing effect is shown in augmented-reality form.

One or more of the garment selection module, the visual data acquisition module, the pose recognition and localization module, the garment data acquisition module, the garment and human-pose fusion module, the rendering module and the garment display module are stored in the memory of the glasses.
The purpose of the present invention can also be further achieved by the following technical measures.
In the foregoing virtual dressing system, the pose recognition and localization module includes a mirror identification and localization unit and a pose recognition and localization unit. The mirror identification and localization unit segments the mirror region out of the visual data to obtain in-mirror visual data, which includes the body image in the mirror. The pose recognition and localization unit obtains the current human pose from the in-mirror visual data.
In the foregoing virtual dressing system, the visual data acquisition module is specifically configured to acquire two-dimensional visual data, and the pose recognition and localization module is specifically configured to estimate, in the DensePose manner and from the two-dimensional visual data, a Skinned Multi-Person Linear (SMPL) model as the current human pose, including obtaining the current pose information of the plurality of three-dimensional human-body key points in the SMPL model.
In the foregoing virtual dressing system, the plurality of human-body key points are the head, neck, left shoulder, right shoulder, left upper arm, right upper arm, left forearm, right forearm, left hand, right hand, chest, abdomen, left thigh, right thigh, left calf, right calf, left foot and right foot.
In the foregoing virtual dressing system, the garment data acquisition module is specifically configured to obtain a three-dimensional garment model of the selected garment and the matching information between the three-dimensional garment key points of the selected garment and the three-dimensional human-body key points. The rendering module is specifically configured to render the three-dimensional garment model matched to the current human pose to obtain a two-dimensional garment image of the selected garment. The garment display module is specifically configured to display the two-dimensional garment image by superimposing it on the body image seen by the user.
In the foregoing virtual dressing system, the rendering module includes a lighting simulation unit configured to obtain the current illumination condition and, when rendering the garment model matched to the current human pose, dynamically set the light source of the virtual world according to that illumination condition.
The foregoing virtual dressing system further includes a calibration module for calibrating the display position of the garment rendering result by using face information as the calibration marker. The garment display module is then specifically configured to superimpose the calibrated garment rendering result on the body image seen by the user according to the calibrated display position.
In the foregoing virtual dressing system, the calibration module includes: a face key point recognition unit for identifying the facial key points of the body image; a standard face display unit for displaying a preset standard face image through the glasses and prompting the user to move until the user's face is aligned with the standard face image; a face alignment judging unit for judging, from the identified facial key points, whether the user's face is aligned with the standard face image; and a calibration unit for determining, when the user's face is aligned with the standard face image, the coordinate mapping between the camera coordinate system and the display coordinate system, so as to calibrate the display position of the garment rendering result according to that coordinate mapping.
The foregoing virtual dressing system further includes a database for storing the garment models of a plurality of selectable garments together with the matching information between one or more garment key points in each garment model and one or more human-body key points. The garment data acquisition module is specifically configured to retrieve from the database the garment model of the selected garment and the matching information between the garment key points of the selected garment and the human-body key points. The system further includes a data entry module for receiving the garment models of the selectable garments in advance, matching one or more garment key points in each garment model with the human-body key points to obtain the matching information, and entering it into the database.
The foregoing virtual dressing system further includes a dressing-effect memo module configured to: at a preset time interval, use the garment selection module, the visual data acquisition module, the pose recognition and localization module, the garment data acquisition module, the garment and human-pose fusion module, the rendering module and the garment display module to repeat the steps from acquiring the visual data through displaying the garment rendering result superimposed on the body image seen by the user, so that the dressing effect is displayed in real time; and to record, in each repetition, one or more of the visual data, the matching information between the garment key points of the selected garment and the human-body key points, and the garment model matched to the current human pose, so as to generate a dressing history record.
The foregoing virtual dressing system further includes a display mode judgment module for judging the display mode according to whether the system is currently online or offline and/or according to the user's choice, the display mode being one or more of a first display mode, a second display mode and a third display mode. The garment display module includes one or more of a first display unit, a second display unit and a third display unit. The first display unit is configured, when the display mode is the first display mode, to superimpose the garment rendering result on the body image reflected by the mirror, so that the user sees his or her virtually dressed self in the mirror. The second display unit is configured, when the display mode is the second display mode, to superimpose the garment rendering result on the body image in the visual data of the dressing history record, so as to display the visual data with the virtual dressing effect superimposed. The third display unit is configured, when the display mode is the third display mode, to superimpose the garment rendering result on the body image in the visual data collected in real time, so as to display the visual data with the virtual dressing effect superimposed.
The purpose of the present invention is also achieved by the following technical solution. The virtual dressing method proposed by the present invention includes the following steps: providing a garment selection interface and receiving the garment selected by the user as the selected garment; acquiring visual data, the visual data including the body image reflected by a mirror; obtaining, from the visual data, the current human pose of the body reflected by the mirror, the current human pose including the current pose information of a plurality of human-body key points; obtaining the garment model of the selected garment, the garment model including a plurality of garment key points, and obtaining the matching information between the garment key points of the selected garment and the human-body key points; determining the current pose information of the garment key points of the selected garment from the current pose information of the human-body key points, the garment model of the selected garment and the matching information, so as to obtain a garment model of the selected garment matched to the current human pose; rendering the matched garment model to obtain a garment rendering result; and displaying the garment rendering result by superimposing it on the body image seen by the user, so that the dressing effect is shown in augmented-reality form.
The purpose of the present invention can also be further achieved by the following technical measures.
In the foregoing virtual dressing method, obtaining the current human pose of the body reflected by the mirror from the visual data includes: segmenting the mirror region out of the visual data to obtain in-mirror visual data, the in-mirror visual data including the body image; and obtaining the current human pose from the in-mirror visual data.
In the foregoing virtual dressing method, acquiring the visual data includes acquiring two-dimensional visual data, and obtaining the current human pose of the body reflected by the mirror from the visual data includes estimating, in the DensePose manner, an SMPL model from the two-dimensional visual data as the current human pose, including obtaining the current pose information of the plurality of three-dimensional human-body key points in the SMPL model.
In the foregoing virtual dressing method, the plurality of human-body key points are the head, neck, left shoulder, right shoulder, left upper arm, right upper arm, left forearm, right forearm, left hand, right hand, chest, abdomen, left thigh, right thigh, left calf, right calf, left foot and right foot.
In the foregoing virtual dressing method, obtaining the garment model of the selected garment includes obtaining a three-dimensional garment model of the selected garment; obtaining the matching information between the garment key points of the selected garment and the human-body key points includes obtaining the matching information between the three-dimensional garment key points of the selected garment and the three-dimensional human-body key points; rendering the garment model matched to the current human pose to obtain the garment rendering result includes rendering the three-dimensional matched garment model to obtain a two-dimensional garment image of the selected garment; and displaying the garment rendering result by superimposing it on the body image seen by the user includes superimposing the two-dimensional garment image on the body image seen by the user.
In the foregoing virtual dressing method, rendering the garment model matched to the current human pose to obtain the garment rendering result includes: obtaining the current illumination condition; and, when rendering the garment model matched to the current human pose, dynamically setting the light source of the virtual world according to that illumination condition.
The foregoing virtual dressing method further includes, before displaying the garment rendering result superimposed on the body image seen by the user, calibrating the display position of the garment rendering result by using face information as the calibration marker; displaying the garment rendering result superimposed on the body image seen by the user then includes superimposing the calibrated garment rendering result on the body image seen by the user according to the calibrated display position.
In the foregoing virtual dressing method, calibrating the display position of the garment rendering result by using face information as the calibration marker includes: identifying the facial key points of the body image; displaying a preset standard face image and prompting the user to move until the user's face is aligned with the standard face image; judging, from the identified facial key points, whether the user's face is aligned with the standard face image; and, when the user's face is aligned with the standard face image, determining the coordinate mapping between the camera coordinate system and the display coordinate system, so as to calibrate the display position of the garment rendering result according to that coordinate mapping.
The foregoing virtual dressing method further includes: at a preset time interval, repeating the steps from acquiring the visual data through displaying the garment rendering result superimposed on the body image seen by the user, so that the dressing effect is displayed in real time; and recording, in each repetition, one or more of the visual data, the matching information between the garment key points of the selected garment and the human-body key points, and the garment model matched to the current human pose, so as to generate a dressing history record.
The foregoing virtual dressing method further includes judging the display mode according to whether the system is currently online or offline and/or according to the user's choice, the display mode being one or more of a first display mode, a second display mode and a third display mode. Displaying the garment rendering result superimposed on the body image seen by the user then includes one or more of the following steps: if the display mode is the first display mode, superimposing the garment rendering result on the body image reflected by the mirror, so that the user sees his or her virtually dressed self in the mirror; if the display mode is the second display mode, superimposing the garment rendering result on the body image in the visual data of the dressing history record, so as to display the visual data with the virtual dressing effect superimposed; and if the display mode is the third display mode, superimposing the garment rendering result on the body image in the visual data collected in real time, so as to display the visual data with the virtual dressing effect superimposed.
The purpose of the present invention is also achieved by the following technical solution. A device proposed by the present invention includes: a memory for storing non-transitory computer-readable instructions; and a processor for executing the computer-readable instructions, such that the computer-readable instructions, when executed by the processor, implement the steps of the virtual dressing method described above.
The purpose of the present invention is also achieved by the following technical solution. A computer-readable storage medium proposed by the present invention stores a computer program which, when executed by a computer or a processor, implements the steps of the foregoing method embodiments.
Compared with the prior art, the present invention has obvious advantages and beneficial effects. Through the above technical solutions, the virtual dressing system, method, device and medium proposed by the present invention have at least the following advantages:

(1) The present invention realizes the virtual dressing experience with augmented-reality glasses and an ordinary mirror. In use, the user only needs to wear the augmented-reality glasses, and any ordinary mirror nearby becomes a "virtual mirror"; the virtual try-on experience is no longer tied to a specially designed large screen, so it is flexible and convenient and the user experience is good.

(2) By first segmenting the mirror region out of the visual data and then performing human pose recognition on the in-mirror visual data, the present invention removes the interference caused by people in the non-mirror region, which helps to recognize the user more accurately.

(3) By estimating an SMPL model in the DensePose manner to obtain the current human pose, the present invention can determine the current human pose data accurately.

(4) By defining 18 human-body key points in the human model, namely the head, neck, left shoulder, right shoulder, left upper arm, right upper arm, left forearm, right forearm, left hand, right hand, chest, abdomen, left thigh, right thigh, left calf, right calf, left foot and right foot, the present invention speeds up human pose recognition while still dressing the user accurately.

(5) By modelling the three-dimensional garment as a deformable object, the garment deforms with the changes of the human pose, which produces a better experience.

(6) By dynamically setting the light source of the virtual world according to the illumination condition, the present invention reflects the colour of the garment in the real scene, produces a realistic rendering effect, and makes it easier for the user to match colours.

(7) By calibrating with face information, the present invention can superimpose the virtual garment on the user more accurately when displaying it.

(8) By providing several display modes, the present invention can show the virtual dressing effect in different ways according to the online or offline state or according to the user's choice, which is flexible and convenient and gives a good user experience.
The above is only an overview of the technical solution of the present invention. In order to make the technical means of the invention clearer and implementable according to the contents of the specification, and to make the above and other objects, features and advantages of the invention more comprehensible, preferred embodiments are described in detail below with reference to the accompanying drawings.
Brief description of the drawings
Fig. 1 is a structural schematic diagram of a virtual dressing system according to an embodiment of the invention;
Fig. 2 is a structural schematic diagram of a virtual dressing system according to another embodiment of the invention;
Fig. 3 is a flow diagram of a virtual dressing method according to an embodiment of the invention;
Fig. 4 is a structural block diagram of a device according to an embodiment of the invention.
Specific embodiments
To further explain the technical means and effects adopted by the present invention to achieve the intended purpose, specific embodiments, structures, features and effects of the virtual dressing system, method, device and medium proposed by the present invention are described in detail below with reference to the accompanying drawings and preferred embodiments.
Fig. 1 is a schematic diagram of an embodiment of the virtual dressing system 100 of the present invention, and Fig. 2 is a schematic diagram of another embodiment of the virtual dressing system 100. Referring to Fig. 1 or Fig. 2, the exemplary virtual dressing system 100 mainly includes glasses 110. The glasses 110 include one or more of a garment selection module 111, a visual data acquisition module 112, a pose recognition and localization module 113, a garment data acquisition module 114, a garment and human-pose fusion module 115, a rendering module 116 and a garment display module 117. In some examples the glasses 110 are smart glasses; optionally they are augmented-reality glasses (also called AR glasses or augmented-reality goggles). The glasses 110 include a memory and a processor, and one or more of the aforementioned modules 111 to 117 are stored in the memory of the glasses 110. In use, the user wears the glasses 110 and stands in front of a mirror. The mirror has a reflecting surface; in general it is an ordinary mirror. The mirror is needed because the camera on the AR glasses cannot photograph the user directly, so the mirror allows the camera to capture the user's pose.
The garment selection module 111 is configured to provide a garment selection interface, receive the one or more items or categories of garments that the user wants to try on as the selected garment, and output the selected garment. The type of garment is not restricted; it can be a bag, a coat, a skirt and so on. In some examples the garment selection interface is presented on the screen of the glasses 110.
The visual data acquisition module 112 is configured to acquire visual data and output it. The visual data contains the image of the human body reflected by the mirror. In some embodiments the visual data includes an RGB image containing the RGB information of the user's body reflected by the mirror, and the visual data acquisition module 112 includes an RGB image grabber for acquiring such an image. Optionally the visual data acquisition module 112 includes a camera mounted on the glasses 110.
The pose recognition and localization module 113 is configured to receive the visual data, obtain from it the current human pose of the body reflected by the mirror, and output the current human pose. The current human pose includes the current pose information of a plurality of human-body key points (also called key body positions). In general, the body reflected by the mirror is the user.
The garment data acquisition module 114 is configured to receive the selected garment, obtain the garment model of the selected garment, the garment model including a plurality of garment key points, and obtain the matching information between the garment key points of the selected garment and the human-body key points. In one optional example the garment model is stored in advance at a server, and the garment data acquisition module 114 of the glasses 110 obtains the garment model of the selected garment from the server; in another optional example the garment model is stored in advance in a storage unit of the glasses 110, and the garment data acquisition module 114 obtains it by reading that storage unit.
The garment and human-pose fusion module 115 is configured to receive the current human pose, the garment model of the selected garment and the matching information, and to determine the current pose information of the garment key points of the selected garment from the current pose information of the human-body key points, the garment model and the matching information, so as to obtain a garment model of the selected garment matched to the current human pose.
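As an illustration of this fusion step, the following minimal Python sketch (not part of the original disclosure; all names and the fixed-offset binding are assumptions) lets each garment key point inherit the current position of the human-body key point it is matched to:

```python
import numpy as np

def fuse_garment_with_pose(body_pose, match_info):
    """body_pose  : body key point name -> (3,) current position
    match_info : garment key point name -> (body key point name, (3,) local offset)
    Returns garment key point name -> (3,) position in the current pose."""
    return {g: body_pose[b] + np.asarray(offset) for g, (b, offset) in match_info.items()}

# toy usage: the sleeve tops follow the shoulders
body_pose = {"left_shoulder": np.array([0.20, 1.40, 0.0]),
             "right_shoulder": np.array([-0.20, 1.40, 0.0])}
match_info = {"left_sleeve_top": ("left_shoulder", (0.0, 0.02, 0.0)),
              "right_sleeve_top": ("right_shoulder", (0.0, 0.02, 0.0))}
print(fuse_garment_with_pose(body_pose, match_info))
```

A full implementation would drive the whole garment mesh (for example by skinning or cloth simulation), but the key-point binding shown here is the part the matching information is responsible for.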
The rendering module 116 is configured to render the garment model matched to the current human pose to obtain a garment rendering result. Optionally the garment rendering result includes a garment image matched to the current human pose.
The garment display module 117 is configured to display the garment rendering result, specifically by superimposing it on the body image seen by the user, so that the dressing effect is shown in augmented-reality form. The body image reflected by the mirror may be called the mirror image. In some examples the screen of the glasses 110 is used to superimpose the garment rendering result on the body image reflected by the mirror.
The exemplary virtual dressing system 100 realizes the virtual dressing experience with augmented-reality glasses and an ordinary mirror. In use, the user only needs to wear the augmented-reality glasses, and any ordinary mirror nearby becomes a "virtual mirror"; the system is flexible and convenient and the user experience is good.
In some embodiments the pose recognition and localization module 113 includes a mirror identification and localization unit and a pose recognition and localization unit. The mirror identification and localization unit receives the visual data and, using a mirror identification and localization model, segments the mirror region out of the visual data to obtain in-mirror visual data, which includes the body image reflected by the mirror. The pose recognition and localization unit obtains the current human pose from the in-mirror visual data. With the exemplary virtual dressing system 100, the interference caused by people in the non-mirror region of the visual data is removed, which helps to recognize the user more accurately.
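A minimal Python sketch of this idea (not part of the original disclosure; `segment_mirror` and `estimate_pose` stand in for the mirror identification model and the pose estimator):

```python
import numpy as np

def pose_from_mirror_region(frame, segment_mirror, estimate_pose):
    """frame: (H, W, 3) uint8 RGB image from the glasses camera.
    segment_mirror(frame) -> (H, W) boolean mask, True inside the mirror region.
    estimate_pose(image) -> current human pose estimate."""
    mirror_mask = segment_mirror(frame)
    in_mirror = frame.copy()
    in_mirror[~mirror_mask] = 0      # blank out everything outside the mirror,
                                     # so bystanders in the room cannot interfere
    return estimate_pose(in_mirror)
```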
In some embodiments the collected visual data is a two-dimensional image, for example a two-dimensional RGB image, and the three-dimensional human pose is recognized from this two-dimensional image. Optionally the visual data acquisition module 112 is specifically configured to acquire two-dimensional visual data, and the pose recognition and localization module 113 is specifically configured to estimate, in the DensePose manner and from the two-dimensional visual data, a Skinned Multi-Person Linear model (SMPL model) as the current human pose. Optionally, obtaining the SMPL model includes obtaining the current pose information of the plurality of three-dimensional human-body key points in the SMPL model.

DensePose is a human pose estimation technique that maps the human pixels of a two-dimensional image to the three-dimensional human body surface and processes the dense coordinates at a rate of several frames per second, finally achieving accurate localization and pose estimation of a moving person. The SMPL model is a parametric human model containing various parameters describing the human body, including parameters representing body shape such as height, weight and head-to-body ratio, and parameters representing the overall motion pose, such as the relative angles of 24 human-body key points.
By estimating an SMPL model in the DensePose manner, the present invention can obtain the current human pose data accurately.
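Assuming the DensePose-style fitting of the SMPL parameters to the two-dimensional image is handled by an external estimator, the body key points can then be read off the posed SMPL mesh with its joint regressor. A sketch (not part of the original disclosure; the shapes follow the published SMPL model, which has 6890 vertices and 24 joints):

```python
import numpy as np

def keypoints_from_smpl(vertices, joint_regressor, keypoint_ids):
    """vertices        : (6890, 3) posed SMPL mesh vertices
    joint_regressor : (24, 6890) regression matrix shipped with the SMPL model
    keypoint_ids    : indices of the joints used as the system's body key points
    Returns the (len(keypoint_ids), 3) current positions of those key points."""
    joints = joint_regressor @ vertices      # (24, 3) joint positions
    return joints[np.asarray(keypoint_ids)]
```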
It should be noted that the present invention is not limited to DensePose for pose recognition, nor to the SMPL model for representing the human body; the proposed virtual dressing system 100 can also be realized with other pose recognition methods or other human models.
It should also be noted that the present invention does not restrict the type of pose information of the human-body key points. The pose information can be expressed in various ways, for example as position coordinates in a planar rectangular coordinate system, or as relative angles and relative distances between several key points.
In some embodiments, instead of the 24 human-body key points commonly used by the SMPL model, fewer key points are used in order to speed up human pose estimation in the DensePose manner. At the same time, the characteristics of virtual dressing must be considered: the number of key points cannot be reduced so far that the dressing effect suffers. Specifically, the human-body key points of the invention include: head, neck, left shoulder, right shoulder, left upper arm, right upper arm, left forearm, right forearm, left hand, right hand, chest, abdomen, left thigh, right thigh, left calf, right calf, left foot and right foot. By defining these 18 key points in the human model, the exemplary virtual dressing system 100 speeds up pose recognition with the DensePose model while still dressing the user accurately.
It should be noted that the foregoing set of 18 human-body key points is not limited to the SMPL model; it can also be applied in embodiments of the invention that use other human models.
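For reference, the 18 key points enumerated above can be represented as a simple enumeration (an illustrative sketch only; the ordering of the values is an assumption):

```python
from enum import IntEnum

class BodyKeypoint(IntEnum):
    """The 18 body key points used by the described embodiment."""
    HEAD = 0;            NECK = 1
    LEFT_SHOULDER = 2;   RIGHT_SHOULDER = 3
    LEFT_UPPER_ARM = 4;  RIGHT_UPPER_ARM = 5
    LEFT_FOREARM = 6;    RIGHT_FOREARM = 7
    LEFT_HAND = 8;       RIGHT_HAND = 9
    CHEST = 10;          ABDOMEN = 11
    LEFT_THIGH = 12;     RIGHT_THIGH = 13
    LEFT_CALF = 14;      RIGHT_CALF = 15
    LEFT_FOOT = 16;      RIGHT_FOOT = 17
```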
In some embodiments the garment data acquisition module 114 is specifically configured to obtain a three-dimensional garment model of the selected garment and the matching information between the three-dimensional garment key points of the selected garment and the three-dimensional human-body key points. The garment model matched to the current human pose that is produced by the garment and human-pose fusion module 115 is then also three-dimensional. The rendering module 116 is specifically configured to render this three-dimensional matched garment model to obtain a two-dimensional garment image of the selected garment, and the garment display module 117 is specifically configured to display the two-dimensional garment image by superimposing it on the user's mirror image. In the exemplary virtual dressing system 100 the three-dimensional garment is modelled as a deformable object, so the garment deforms with the changes of the human pose, which produces a better experience.
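The render-then-superimpose step can be pictured as a pinhole projection of the posed garment followed by an alpha blend of the rendered image over the view. A sketch (not part of the original disclosure; the camera intrinsics and the RGBA render are assumptions):

```python
import numpy as np

def project_points(points_3d, focal, center):
    """Project (N, 3) camera-space points to (N, 2) pixel coordinates (pinhole model)."""
    x, y, z = points_3d[:, 0], points_3d[:, 1], points_3d[:, 2]
    return np.stack([focal * x / z + center[0], focal * y / z + center[1]], axis=1)

def overlay_rgba(view_rgb, garment_rgba):
    """Alpha-blend a rendered (H, W, 4) garment image onto the (H, W, 3) view."""
    alpha = garment_rgba[..., 3:4].astype(np.float32) / 255.0
    blended = (1.0 - alpha) * view_rgb + alpha * garment_rgba[..., :3]
    return blended.astype(np.uint8)
```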
In some embodiments the rendering module 116 includes a lighting simulation unit. The lighting simulation unit obtains the current illumination condition and, when rendering the garment model matched to the current human pose, dynamically sets the light source of the virtual world according to that illumination condition, so that the colour of the garment in the real scene is reflected, a realistic rendering effect is produced and the user can match colours. The current illumination condition can be obtained in various ways: in one embodiment it is measured in real time with a sensor; in another embodiment the current time is obtained and the illumination condition is determined from a preset mapping between time and illumination; in yet another embodiment several of these methods are used together and their results combined to determine the current illumination condition.
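A minimal sketch of this dynamic light-source choice (not part of the original disclosure; the lux normalisation and the hour-to-light table are invented placeholder values):

```python
import datetime

# assumed mapping: local hour -> (light intensity in [0, 1], colour temperature in kelvin)
HOUR_TO_LIGHT = {6: (0.4, 3500), 9: (0.8, 5000), 12: (1.0, 6500),
                 15: (0.9, 5500), 18: (0.5, 3000), 21: (0.2, 2700)}

def virtual_light(sensor_lux=None, now=None):
    """Prefer a live sensor reading; otherwise fall back to the time-of-day table."""
    if sensor_lux is not None:
        return min(sensor_lux / 10000.0, 1.0), 5500
    now = now or datetime.datetime.now()
    nearest_hour = min(HOUR_TO_LIGHT, key=lambda h: abs(h - now.hour))
    return HOUR_TO_LIGHT[nearest_hour]
```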
In some embodiments the virtual dressing system 100 further includes a calibration module 118. The calibration module 118 calibrates the two-dimensional garment image generated by the rendering module 116 so that the garment display module 117 displays the calibrated image. Calibration here means unifying the real-world coordinate system in which the user stands with the camera coordinate system of the glasses 110. A common calibration approach uses a 2D marker.
Further, in some embodiments the calibration module 118 is specifically configured to calibrate with face information, that is, to use the face information as the calibration marker in order to calibrate the display position of the garment rendering result. The garment display module 117 is then specifically configured to superimpose the calibrated garment rendering result on the body image seen by the user according to the calibrated display position.
As an optional specific embodiment, the calibration module 118 includes the following units:
a face key point recognition unit for identifying the facial key points of the body image in the visual data, such as the eyebrows, nose and mouth;
a standard face display unit for displaying a preset standard face image through the glasses 110, for example on the screen of the AR glasses, and prompting the user to move until the user's face (i.e. the face in the mirror) is aligned with the standard face image;
a face alignment judging unit for judging, from the identified facial key points, whether the user's face is aligned with the standard face image; the alignment need not be perfect, only within a preset error threshold;
a calibration unit for determining, when the user's face is aligned with the standard face image, the coordinate mapping between the camera coordinate system and the display coordinate system, so as to calibrate the display position of the garment rendering result according to that mapping and thus superimpose the garment rendering result shown on the screen of the glasses 110 accurately on the body image reflected by the mirror.
By calibrating with face information, the exemplary virtual dressing system 100 can accurately superimpose the virtual information on the screen of the glasses 110 onto the real world seen by the user, giving the user a genuine AR impression.
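The coordinate mapping determined at alignment time can be as simple as a least-squares 2D affine transform between the matched face key points. A sketch (not part of the original disclosure; the pixel tolerance is an assumption):

```python
import numpy as np

def is_aligned(cam_pts, target_pts, tol_px=10.0):
    """Mean face key-point distance below a preset pixel threshold counts as aligned."""
    return float(np.mean(np.linalg.norm(cam_pts - target_pts, axis=1))) < tol_px

def estimate_affine(cam_pts, disp_pts):
    """Least-squares 2x3 affine map from camera to display coordinates.
    cam_pts, disp_pts: (N, 2) matched key points, N >= 3."""
    ones = np.ones((cam_pts.shape[0], 1))
    X = np.hstack([cam_pts, ones])                       # (N, 3)
    A, *_ = np.linalg.lstsq(X, disp_pts, rcond=None)     # (3, 2)
    return A.T                                           # (2, 3)

def cam_to_display(affine, pts):
    """Map (M, 2) camera-space points into display coordinates."""
    pts_h = np.hstack([pts, np.ones((pts.shape[0], 1))])
    return pts_h @ affine.T
```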
In some embodiments the exemplary virtual dressing system 100 further includes a database 120 from which the garment data is obtained. The database 120 stores the garment models of a plurality of selectable garments for the user to choose from, together with the matching information between one or more garment key points in each garment model and one or more human-body key points. The garment data acquisition module 114 is then specifically configured to retrieve from the database 120 the garment model of the selected garment and the matching information between its garment key points and the human-body key points. The database 120 can be implemented by the memory in the glasses 110; alternatively, as shown in Fig. 2, it can be located at a server rather than in the glasses 110, in which case the garment data acquisition module 114 obtains the garment data by interacting with the server.
Further, the exemplary virtual dressing system 100 includes a data entry module 121. The data entry module 121 receives in advance the garment models of the garments selectable by the user, matches one or more garment key points in each garment model with one or more human-body key points to obtain the matching information, and enters it into the database 120.
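An in-memory stand-in for the database 120 and the data entry module 121 might look as follows (a sketch only, not the original implementation; the field names are assumptions):

```python
from dataclasses import dataclass, field

@dataclass
class GarmentRecord:
    garment_id: str
    model_path: str                                   # path to the 3D garment model file
    keypoints: dict = field(default_factory=dict)     # garment key point -> rest position
    match_info: dict = field(default_factory=dict)    # garment key point -> body key point

class GarmentDatabase:
    def __init__(self):
        self._records = {}

    def enter(self, record: GarmentRecord):
        """Data entry module 121: store a garment model and its matching information."""
        self._records[record.garment_id] = record

    def fetch(self, garment_id: str) -> GarmentRecord:
        """Garment data acquisition module 114: retrieve the selected garment."""
        return self._records[garment_id]
```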
In some embodiments the exemplary virtual dressing system 100 further includes a dressing-effect memo module 119. The dressing-effect memo module 119 starts, at a preset time interval, the aforementioned modules and units of the virtual dressing system 100, for example the visual data acquisition module 112, the pose recognition and localization module 113, the garment data acquisition module 114, the garment and human-pose fusion module 115, the rendering module 116 and the garment display module 117, so as to repeat the steps from acquiring the visual data through displaying the garment rendering result, and thus display the dressing effect in real time.
Further, in some embodiments the dressing-effect memo module 119 also records the virtual dressing information of each repetition to generate the user's dressing history record. The virtual try-on information includes one or more of the visual data, the matching information between the garment key points of the selected garment and the human-body key points, and the garment model matched to the current human pose. This lets the user compare the effects of different garments.
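The periodic refresh plus history recording reduces to a timed loop over the processing chain. A sketch (not part of the original disclosure; every collaborator is injected as an assumed callable):

```python
import time

def dressing_loop(capture, estimate_pose, fuse, render, display,
                  interval_s=0.1, stop=lambda: False):
    """Repeat capture -> pose -> fuse -> render -> display at a fixed interval and
    record each round so a dressing history can be replayed later."""
    history = []
    while not stop():
        frame = capture()
        pose = estimate_pose(frame)
        result = render(fuse(pose))
        display(result)
        history.append({"time": time.time(), "frame": frame,
                        "pose": pose, "result": result})
        time.sleep(interval_s)
    return history
```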
In some embodiments the exemplary virtual dressing system 100 further includes a display mode judgment module (not shown in the figures) for judging the display mode according to whether the system is currently online or offline and/or according to the user's choice. The display mode is one or more of a first display mode, a second display mode and a third display mode, and the garment display module 117 includes one or more of a first display unit, a second display unit and a third display unit.

The first display unit is configured, when the display mode is the first display mode, to superimpose the garment rendering result on the body image reflected by the mirror, so that the user sees his or her virtually dressed self in the mirror.

The second display unit is configured, when the display mode is the second display mode, to superimpose the garment rendering result on the body image in the visual data of the dressing history record, so as to display the visual data with the virtual dressing effect superimposed.

The third display unit is configured, when the display mode is the third display mode, to superimpose the garment rendering result on the body image in the visual data collected in real time, so as to display the visual data with the virtual dressing effect superimposed.
As an optional specific example, the display mode judgment module judges whether the display mode is the first or the second display mode according to whether the system is currently online or offline, and the garment display module 117 includes the aforementioned first and second display units. If the system is online, the first display mode is used and the garment rendering result is superimposed on the body image reflected by the mirror; in the online experience the user sees himself or herself in the mirror together with the rendered garment shown by the glasses 110. If the system is offline, the second display mode is used and the garment rendering result is superimposed on the body image in the visual data of the dressing history record; in the offline experience the user sees the try-on video previously shot by the camera of the glasses 110 together with the rendered garment shown by the glasses 110.
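The mode selection and the corresponding choice of background can be sketched as follows (not part of the original disclosure; the mode numbering and the online/offline fallback mirror the example above, and `blend` stands for the superimposition routine):

```python
def choose_display_mode(online: bool, user_choice=None):
    """Mode 1: overlay on the mirror reflection; mode 2: overlay on a recorded frame
    from the dressing history; mode 3: overlay on the live camera frame."""
    if user_choice in (1, 2, 3):
        return user_choice
    return 1 if online else 2

def show(result, mode, blend, mirror_view=None, history_frame=None, live_frame=None):
    """blend(result, background) superimposes the rendering result, e.g. by alpha blending."""
    background = {1: mirror_view, 2: history_frame, 3: live_frame}[mode]
    return blend(result, background)
```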
It should be noted that in some embodiments the glasses 110 need not contain all of the garment selection module 111, the visual data acquisition module 112, the pose recognition and localization module 113, the garment data acquisition module 114, the garment and human-pose fusion module 115, the rendering module 116 and the garment display module 117; they may contain only some of them, while the other modules are located at a server or in another device. For example, the exemplary virtual dressing system 100 may further include the aforementioned mirror, the mirror being a smart mirror with a memory and a processor in which the remaining modules are located.
Fig. 3 is a schematic flow diagram of an embodiment of the virtual dressing method of the invention. Referring to Fig. 3, the exemplary virtual dressing method mainly includes the following steps.
Step S11: provide a garment selection interface and receive the one or more items or categories of garments to be tried on, selected by the user, as the selected garment. The type of garment is not restricted; it can be a bag, a coat, a skirt and so on.
Step S12: acquire visual data. The visual data contains the image of the human body reflected by a mirror. The mirror has a reflecting surface and in general is an ordinary mirror. Optionally the visual data includes an RGB image containing the RGB information of the user's body reflected by the mirror.
Step S13: obtain from the visual data the current human pose of the body reflected by the mirror. The current human pose includes the current pose information of a plurality of human-body key points (also called key body positions).
Step S14: obtain the garment model of the selected garment, the garment model including a plurality of garment key points, and obtain the matching information between the garment key points of the selected garment and the human-body key points.
Step S15: determine the current pose information of the garment key points of the selected garment from the garment model and matching information of the selected garment and the current pose information of the human-body key points, so as to obtain a garment model of the selected garment matched to the current human pose.
Step S16: render the garment model matched to the current human pose to obtain a garment rendering result. Optionally the garment rendering result includes a garment image matched to the current human pose.
Step S17: display the garment rendering result, specifically by superimposing it on the body image seen by the user, so that the dressing effect is shown in augmented-reality form.
In some embodiments step S13 specifically includes: segmenting the mirror region out of the visual data with a mirror identification and localization model to obtain in-mirror visual data, the in-mirror visual data including the body image reflected by the mirror; and obtaining the current human pose from the in-mirror visual data. With the exemplary virtual dressing method, the interference caused by people in the non-mirror region of the visual data is removed, which helps to recognize the user more accurately.
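Tying steps S11 to S17 together, one pass of the method can be sketched as follows (not part of the original disclosure; every interface is an assumption made for illustration):

```python
def virtual_dressing_once(ui, camera, pose_estimator, db, fuse, render, display):
    garment_id = ui.select_garment()         # S11: garment selection interface
    frame = camera.capture()                 # S12: visual data including the mirror image
    body_pose = pose_estimator(frame)        # S13: current human pose (body key points)
    record = db.fetch(garment_id)            # S14: garment model and matching information
    matched_model = fuse(body_pose, record)  # S15: garment model matched to the pose
    result = render(matched_model)           # S16: garment rendering result
    display(result)                          # S17: superimpose on the body image seen by the user
```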
In some embodiments the collected visual data is a two-dimensional image, for example a two-dimensional RGB image, and the three-dimensional human pose is recognized from this two-dimensional image. Optionally, step S12 includes acquiring two-dimensional visual data, and step S13 includes estimating, in the DensePose manner and from the two-dimensional visual data, a Skinned Multi-Person Linear model (SMPL model) as the current human pose. Optionally, obtaining the SMPL model includes obtaining the current pose information of the plurality of three-dimensional human-body key points in the SMPL model.
DensePose is a human body pose estimation technique that maps the human pixels of a two-dimensional image onto the three-dimensional body surface and processes dense coordinates at a rate of several frames per second, ultimately achieving accurate localization and pose estimation of a moving person. The SMPL model is a parameterized human body model containing several groups of parameters describing the human body: shape parameters expressing, for example, build (heavy or thin), height and head-to-body ratio, and pose parameters expressing the overall body orientation and the relative angles of 24 human body joints (key points). A toy container for these parameter groups is sketched below.
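The following is a sketch only; the actual SMPL model is defined by its authors, and the split of its pose parameters into a global orientation plus per-joint relative rotations is assumed here from the description above.

```python
import numpy as np
from dataclasses import dataclass, field

@dataclass
class SMPLParams:
    """Toy container for the two groups of SMPL parameters mentioned above."""
    # Shape coefficients: overall build (heavy/thin), height, head-to-body ratio, ...
    betas: np.ndarray = field(default_factory=lambda: np.zeros(10))
    # Pose: one axis-angle rotation per joint for 24 joints; joint 0 is the global
    # body orientation, the others are rotations relative to the parent joint.
    pose: np.ndarray = field(default_factory=lambda: np.zeros((24, 3)))
```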
It should be noted that the present invention is neither limited to using DensePose for posture recognition nor to using the SMPL model to characterize the human body; the proposed virtual dress-up method can also be carried out with other posture recognition approaches or other human body models.
In some embodiments, the 24 human body key points commonly used by the SMPL model are not used; instead, fewer human body key points are used to speed up the DensePose-based body posture estimation. At the same time, the characteristics of virtual dress-up must be considered: the number of key points cannot be reduced so far that the dress-up effect suffers. Specifically, the human body key points of the invention include: head, neck, left shoulder, right shoulder, left upper arm, right upper arm, left forearm, right forearm, left hand, right hand, chest, abdomen, left thigh, right thigh, left calf, right calf, left foot and right foot. By configuring these 18 human body key points in the human body model, the exemplary virtual dress-up method speeds up posture recognition with the DensePose model while still producing an accurate virtual dress-up.
It should be noted that the 18-key-point scheme proposed by the present invention is not limited to the SMPL model; it can also be applied in embodiments of the invention that use other human body models for virtual dress-up. The 18 key points are enumerated in the sketch below.
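For concreteness, the 18 key points listed above can be written as an enumeration; the identifier names and index order are illustrative, since the disclosure does not fix an ordering.

```python
from enum import IntEnum

class BodyKeyPoint(IntEnum):
    """The 18 body key points used instead of SMPL's 24 joints."""
    HEAD = 0
    NECK = 1
    LEFT_SHOULDER = 2
    RIGHT_SHOULDER = 3
    LEFT_UPPER_ARM = 4
    RIGHT_UPPER_ARM = 5
    LEFT_FOREARM = 6
    RIGHT_FOREARM = 7
    LEFT_HAND = 8
    RIGHT_HAND = 9
    CHEST = 10
    ABDOMEN = 11
    LEFT_THIGH = 12
    RIGHT_THIGH = 13
    LEFT_CALF = 14
    RIGHT_CALF = 15
    LEFT_FOOT = 16
    RIGHT_FOOT = 17
```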
In some embodiments, obtaining the dress ornament model of the selected dress ornament in step S14 specifically includes obtaining a three-dimensional model of the selected dress ornament, and obtaining the match information specifically includes obtaining the match information between the three-dimensional dress ornament key points of the selected dress ornament and the three-dimensional human body key points. The dress ornament model matched to the current body posture generated in step S15 is then also three-dimensional. Step S16 specifically includes rendering this three-dimensional, posture-matched dress ornament model to obtain a two-dimensional image of the selected dress ornament, and step S17 specifically includes displaying that two-dimensional dress ornament image by superimposing it on the body image seen by the user. Because the exemplary virtual dress-up method models the dress ornament in three dimensions, the dress ornament deforms as the body posture changes, which produces a better experience. A minimal sketch of the key-point fitting in step S15 follows.
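A minimal sketch of step S15, under the simplifying assumption that each matched dress ornament key point is simply moved to the current 3D position of its human body key point; a real system would also deform the surrounding mesh, for example with skinning weights.

```python
from typing import Dict
import numpy as np

def fit_garment_to_pose(
    garment_keypoints: Dict[str, np.ndarray],  # rest-pose 3D positions of dress ornament key points
    matches: Dict[str, str],                   # dress ornament key point -> matched body key point
    body_pose: Dict[str, np.ndarray],          # current 3D positions of body key points
) -> Dict[str, np.ndarray]:
    """Return the dress ornament key points posed to the current body posture."""
    fitted = dict(garment_keypoints)
    for garment_kp, body_kp in matches.items():
        if body_kp in body_pose:
            fitted[garment_kp] = body_pose[body_kp]
    return fitted
```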
In some embodiments, step S16 specifically includes: obtaining the current illumination condition; and, while rendering the dress ornament model matched to the current body posture, dynamically setting the light source of the virtual world according to that illumination condition, so that the rendering reflects the colour the dress ornament would have in the real scene, produces a realistic result, and makes it easier for the user to match colours. The current illumination condition can be obtained in several ways. In one embodiment, it is sampled in real time with a sensor; in another, the current time is obtained and the illumination condition is looked up in a preset correspondence between time and illumination; in yet another, several of these ways are used at once and their results are combined to determine the current illumination condition, as sketched below.
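A hedged sketch of combining the two illumination sources described above; the hour-to-light table and the fusion weights are invented for illustration and are not specified by the disclosure.

```python
from datetime import datetime
from typing import Optional, Tuple

# Hypothetical preset: dimmer, warmer light at night, brighter by day
# (intensity in [0, 1], colour temperature in kelvin).
PRESET_LIGHT_BY_HOUR = {h: (0.3, 3000) if h < 7 or h >= 20 else (1.0, 5500) for h in range(24)}

def current_light(sensor_intensity: Optional[float] = None) -> Tuple[float, int]:
    """Look up the time-of-day preset and, if a real-time sensor reading is
    available, blend it in; the result drives the virtual-world light source."""
    intensity, colour_temp = PRESET_LIGHT_BY_HOUR[datetime.now().hour]
    if sensor_intensity is not None:
        intensity = 0.5 * intensity + 0.5 * sensor_intensity
    return intensity, colour_temp
```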
In some embodiments, before step S17 the method further includes calibrating with face information: the face information is used as a calibration marker to calibrate the position at which the dress ornament rendering result is to be presented. Step S17 then specifically includes superimposing the calibrated dress ornament rendering result on the body image seen by the user according to the calibrated presentation position.
As an optional specific embodiment, using the face information as a calibration marker to calibrate the presentation position of the dress ornament rendering result specifically includes the following steps (a sketch of the final mapping step appears after this list):
identifying the facial key points of the body image in the vision data;
displaying a preset standard face image and prompting the user to move so as to align the user's face with the standard face image;
judging, from the identified facial key points, whether the user's face is aligned with the standard face image;
and, when the user's face is aligned with the standard face image, determining the coordinate mapping relationship between the camera coordinate system and the display coordinate system, so as to calibrate the presentation position of the dress ornament rendering result according to that coordinate mapping relationship.
In some embodiments, the dress ornament data are obtained from a database. Specifically, step S14 includes retrieving from the database the dress ornament model of the selected dress ornament and the match information between its dress ornament key points and the human body key points. The database stores the dress ornament models of multiple selectable dress ornaments for the user to choose from, together with the match information between one or more dress ornament key points in each model and one or more human body key points.
Further, the exemplary virtual dress-up method also includes a data-entry step: receiving in advance the dress ornament model of a selectable dress ornament, matching one or more dress ornament key points in that model with one or more human body key points to obtain the match information, and entering both into the database, as sketched below.
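A toy, in-memory stand-in for that database and its data-entry step; a real deployment would use a persistent store, and the record fields are assumptions.

```python
from dataclasses import dataclass
from typing import Dict

@dataclass
class GarmentRecord:
    garment_id: str
    model_path: str                     # where the 3D dress ornament model is stored
    keypoint_matches: Dict[str, str]    # dress ornament key point -> human body key point

GARMENT_DB: Dict[str, GarmentRecord] = {}

def register_garment(record: GarmentRecord) -> None:
    """Data-entry step: store a selectable dress ornament and its pre-computed match information."""
    GARMENT_DB[record.garment_id] = record

def fetch_garment(garment_id: str) -> GarmentRecord:
    """Step S14: recall the model and match information of the selected dress ornament."""
    return GARMENT_DB[garment_id]
```

For example, `register_garment(GarmentRecord("tshirt-01", "models/tshirt01.obj", {"left_sleeve": "LEFT_UPPER_ARM"}))` would enter a hypothetical T-shirt model into the store.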
In some embodiments, the exemplary virtual dress-up method further includes repeating steps S12 to S17 at a preset time interval, so that the dress-up effect is displayed in real time.
Further, in some embodiments, while steps S12 to S17 are being repeated, the virtual dress-up information of each pass is recorded to generate the user's historical dress-up record. The virtual dress-up information includes one or more of: the vision data, the match information between the dress ornament key points of the selected dress ornament and the human body key points, and the dress ornament model matched to the current body posture. This allows the user to compare the effects of different dress ornaments. A sketch of such a loop follows.
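A minimal sketch of that loop, assuming a hypothetical `run_one_pass` callable performs steps S12–S17 and returns whatever should be remembered for the history record.

```python
import time
from typing import Any, Callable, List, Optional

def run_dressup_loop(
    run_one_pass: Callable[[], Any],
    interval_s: float = 0.1,
    max_passes: Optional[int] = None,
) -> List[Any]:
    """Repeat the S12-S17 steps at a preset interval and accumulate the history record."""
    history: List[Any] = []
    passes = 0
    while max_passes is None or passes < max_passes:
        history.append(run_one_pass())  # e.g. frame, match info, posture-matched model
        passes += 1
        time.sleep(interval_s)
    return history
```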
In some embodiments, the exemplary virtual dress-up method further includes determining a display mode according to whether the system is currently online or offline and/or according to the user's choice. The display mode includes one or more of a first display mode, a second display mode and a third display mode, and step S17 includes one or more of the following steps:
if the display mode is the first display mode, superimposing the dress ornament rendering result on the body image reflected by the mirror when displaying it, so that the user sees the virtually dressed-up self in the mirror;
if the display mode is the second display mode, superimposing the dress ornament rendering result on the body image in the vision data of the historical dress-up record when displaying it, so as to show vision data overlaid with the virtual dress-up effect;
if the display mode is the third display mode, superimposing the dress ornament rendering result on the body image in the vision data collected in real time when displaying it, so as to show vision data overlaid with the virtual dress-up effect.
As an optional specific example, determining the display mode includes judging, from whether the system is currently online or offline, whether the display mode is the first or the second display mode. Step S17 then specifically includes: if online, using the first display mode and superimposing the dress ornament rendering result on the body image reflected by the mirror; if offline, using the second display mode and superimposing the dress ornament rendering result on the body image in the vision data of the historical dress-up record. In the online experience the user sees himself or herself in the mirror together with the rendered dress ornament shown by the AR glasses; in the offline experience the user sees the fitting video shot by the camera of the AR glasses together with the rendered dress ornament shown by the glasses. A sketch of this mode decision appears below.
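A sketch of that decision rule; the enum member names are invented for illustration.

```python
from enum import Enum
from typing import Optional

class DisplayMode(Enum):
    MIRROR_OVERLAY = 1     # first mode: overlay on the body image reflected by the mirror
    HISTORY_PLAYBACK = 2   # second mode: overlay on frames from the historical dress-up record
    LIVE_CAMERA = 3        # third mode: overlay on the live camera feed

def choose_display_mode(online: bool, user_choice: Optional[DisplayMode] = None) -> DisplayMode:
    """An explicit user choice wins; otherwise online users get the mirror overlay
    and offline users get playback of the historical record."""
    if user_choice is not None:
        return user_choice
    return DisplayMode.MIRROR_OVERLAY if online else DisplayMode.HISTORY_PLAYBACK
```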
Fig. 4 is a hardware block diagram of equipment according to an embodiment of the invention. As shown in Fig. 4, the equipment 200 of this embodiment includes a memory 201 and a processor 202, and the components of the equipment 200 are interconnected by a bus system and/or other forms of connection (not shown). The equipment 200 of the invention can be implemented in many forms, including but not limited to augmented reality glasses (also called AR glasses or smart glasses) or other augmented reality (AR) devices, virtual reality (VR) devices, smartwatches, smartphones, laptops, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable media players), navigation devices, in-vehicle terminal devices, in-vehicle display terminals, electronic rear-view mirrors and other mobile terminal devices, as well as fixed terminal devices such as digital TVs and desktop computers.
The memory 201 stores non-transitory computer-readable instructions. Specifically, the memory 201 may include one or more computer program products, which may comprise various forms of computer-readable storage media such as volatile and/or non-volatile memory. The volatile memory may include, for example, random access memory (RAM) and/or cache memory; the non-volatile memory may include, for example, read-only memory (ROM), a hard disk, or flash memory.
The processor 202 may be a central processing unit (CPU) or another form of processing unit with data-processing and/or instruction-execution capability, and may control other components of the equipment 200 to perform the desired functions. In one embodiment of the invention, the processor 202 runs the computer-readable instructions stored in the memory 201 so that the equipment 200 performs all or part of the steps of the virtual dress-up method of the foregoing embodiments of the present invention.
In some embodiments, the equipment 200 of the embodiment of the present invention is a pair of augmented reality glasses.
An embodiment of the present invention also provides a computer-readable storage medium storing a computer program which, when executed by a computer or a processor, implements the steps of the virtual dress-up method described above.
The above is merely a preferred embodiment of the present invention and does not limit the invention in any form. Although the invention has been disclosed by way of a preferred embodiment, this is not intended to limit it: anyone skilled in the art may, without departing from the scope of the present invention, use the technical content disclosed above to make modifications or equivalent variations. Any simple modification, equivalent change or variation of the above embodiment made in accordance with the technical essence of the present invention, without departing from the content of its technical solution, still falls within the scope of that technical solution.
Claims (23)
1. A virtual dress-up system, characterized in that the system comprises:
glasses, comprising a memory and a processor;
a dress ornament selection module, configured to provide a dress ornament selection interface and receive the dress ornament chosen by the user as the selected dress ornament;
a vision data acquisition module, configured to acquire vision data, the vision data including a body image reflected by a mirror;
a posture recognition and positioning module, configured to obtain, from the vision data, the current body posture of the human body reflected by the mirror, the current body posture including the current pose information of multiple human body key points;
a dress ornament data acquisition module, configured to obtain the dress ornament model of the selected dress ornament, the dress ornament model including multiple dress ornament key points, and to obtain the match information between the dress ornament key points of the selected dress ornament and the human body key points;
a dress ornament and body posture fusion module, configured to determine the current pose information of the dress ornament key points of the selected dress ornament according to the current pose information of the human body key points, the dress ornament model of the selected dress ornament and the match information, so as to obtain a dress ornament model of the selected dress ornament matched to the current body posture;
a rendering module, configured to render the dress ornament model matched to the current body posture to obtain a dress ornament rendering result;
a dress ornament display module, configured to display the dress ornament rendering result by superimposing it on the body image seen by the user, so as to present the dress-up effect in augmented reality form;
wherein one or more of the dress ornament selection module, the vision data acquisition module, the posture recognition and positioning module, the dress ornament data acquisition module, the dress ornament and body posture fusion module, the rendering module and the dress ornament display module are provided in the memory of the glasses.
2. The virtual dress-up system according to claim 1, characterized in that:
the posture recognition and positioning module comprises a mirror recognition and positioning unit and a posture recognition and positioning unit;
the mirror recognition and positioning unit is configured to segment the mirror region out of the vision data to obtain in-mirror vision data, the in-mirror vision data including the body image;
the posture recognition and positioning unit is configured to obtain the current body posture from the in-mirror vision data.
3. The virtual dress-up system according to claim 1, characterized in that:
the vision data acquisition module is specifically configured to acquire two-dimensional vision data;
the posture recognition and positioning module is specifically configured to estimate, in the DensePose manner, a Skinned Multi-Person Linear model from the two-dimensional vision data as the current body posture, including obtaining the current pose information of multiple three-dimensional human body key points in the Skinned Multi-Person Linear model.
4. The virtual dress-up system according to claim 1, characterized in that the multiple human body key points are the head, neck, left shoulder, right shoulder, left upper arm, right upper arm, left forearm, right forearm, left hand, right hand, chest, abdomen, left thigh, right thigh, left calf, right calf, left foot and right foot.
5. The virtual dress-up system according to claim 3, characterized in that:
the dress ornament data acquisition module is specifically configured to obtain a three-dimensional dress ornament model of the selected dress ornament and the match information between the three-dimensional dress ornament key points of the selected dress ornament and the three-dimensional human body key points;
the rendering module is specifically configured to render the three-dimensional dress ornament model matched to the current body posture to obtain a two-dimensional dress ornament image of the selected dress ornament;
the dress ornament display module is specifically configured to display the two-dimensional dress ornament image by superimposing it on the body image seen by the user.
6. The virtual dress-up system according to claim 1, characterized in that the rendering module comprises an illumination simulation unit configured to:
obtain the current illumination condition;
dynamically set the light source of the virtual world according to the illumination condition when rendering the dress ornament model matched to the current body posture.
7. The virtual dress-up system according to claim 1, characterized in that:
the system further comprises a calibration module configured to calibrate the position at which the dress ornament rendering result is to be presented, using face information as a calibration marker;
the dress ornament display module is specifically configured to superimpose the calibrated dress ornament rendering result on the body image seen by the user according to the calibrated presentation position.
8. The virtual dress-up system according to claim 7, characterized in that the calibration module comprises:
a facial key point recognition unit, configured to identify the facial key points of the body image;
a standard face display unit, configured to display a preset standard face image using the glasses and prompt the user to move so as to align the user's face with the standard face image;
a face alignment judging unit, configured to judge, from the identified facial key points, whether the user's face is aligned with the standard face image;
a calibration unit, configured to determine, when the user's face is aligned with the standard face image, the coordinate mapping relationship between the camera coordinate system and the display coordinate system, so as to calibrate the presentation position of the dress ornament rendering result according to the coordinate mapping relationship.
9. The virtual dress-up system according to claim 1, characterized in that:
the system further comprises a database storing the dress ornament models of multiple selectable dress ornaments and the match information between one or more dress ornament key points in the dress ornament models of the selectable dress ornaments and one or more human body key points;
the dress ornament data acquisition module is specifically configured to retrieve from the database the dress ornament model of the selected dress ornament and the match information between the dress ornament key points of the selected dress ornament and the human body key points;
the system further comprises a data entry module configured to receive in advance the dress ornament model of a selectable dress ornament, match one or more dress ornament key points in that dress ornament model with human body key points to obtain the match information, and enter both into the database.
10. The virtual dress-up system according to any one of claims 1 to 9, characterized in that it further comprises a dress ornament effect memo module configured to:
at a preset time interval, use the dress ornament selection module, the vision data acquisition module, the posture recognition and positioning module, the dress ornament data acquisition module, the dress ornament and body posture fusion module, the rendering module and the dress ornament display module to repeat the steps from acquiring the vision data to displaying the dress ornament rendering result by superimposing it on the body image seen by the user, so as to display the dress-up effect in real time;
record, in each repetition, one or more of the vision data, the match information between the dress ornament key points of the selected dress ornament and the human body key points, and the dress ornament model matched to the current body posture, so as to generate a historical dress-up record.
11. The virtual dress-up system according to claim 10, characterized in that:
the system further comprises a display mode judgment module configured to determine a display mode according to whether the system is currently online or offline and/or according to the user's choice, the display mode including one or more of a first display mode, a second display mode and a third display mode;
the dress ornament display module comprises one or more of a first display unit, a second display unit and a third display unit;
the first display unit is configured to, if the display mode is the first display mode, superimpose the dress ornament rendering result on the body image reflected by the mirror when displaying it, so that the user sees the virtually dressed-up self in the mirror;
the second display unit is configured to, if the display mode is the second display mode, superimpose the dress ornament rendering result on the body image in the vision data of the historical dress-up record when displaying it, so as to show vision data overlaid with the virtual dress-up effect;
the third display unit is configured to, if the display mode is the third display mode, superimpose the dress ornament rendering result on the body image in the vision data collected in real time when displaying it, so as to show vision data overlaid with the virtual dress-up effect.
12. A virtual dress-up method, characterized in that the method comprises the following steps:
providing a dress ornament selection interface and receiving the dress ornament chosen by the user as the selected dress ornament;
acquiring vision data, the vision data including a body image reflected by a mirror;
obtaining, from the vision data, the current body posture of the human body reflected by the mirror, the current body posture including the current pose information of multiple human body key points;
obtaining the dress ornament model of the selected dress ornament, the dress ornament model including multiple dress ornament key points, and obtaining the match information between the dress ornament key points of the selected dress ornament and the human body key points;
determining the current pose information of the dress ornament key points of the selected dress ornament according to the current pose information of the human body key points, the dress ornament model of the selected dress ornament and the match information, so as to obtain a dress ornament model of the selected dress ornament matched to the current body posture;
rendering the dress ornament model matched to the current body posture to obtain a dress ornament rendering result;
displaying the dress ornament rendering result by superimposing it on the body image seen by the user, so as to present the dress-up effect in augmented reality form.
13. The virtual dress-up method according to claim 12, characterized in that obtaining, from the vision data, the current body posture of the human body reflected by the mirror comprises:
segmenting the mirror region out of the vision data to obtain in-mirror vision data, the in-mirror vision data including the body image;
obtaining the current body posture from the in-mirror vision data.
14. The virtual dress-up method according to claim 12, characterized in that:
acquiring the vision data comprises acquiring two-dimensional vision data;
obtaining, from the vision data, the current body posture of the human body reflected by the mirror comprises: estimating, in the DensePose manner, a Skinned Multi-Person Linear model from the two-dimensional vision data as the current body posture, including obtaining the current pose information of multiple three-dimensional human body key points in the Skinned Multi-Person Linear model.
15. The virtual dress-up method according to claim 12, characterized in that the multiple human body key points are the head, neck, left shoulder, right shoulder, left upper arm, right upper arm, left forearm, right forearm, left hand, right hand, chest, abdomen, left thigh, right thigh, left calf, right calf, left foot and right foot.
16. The virtual dress-up method according to claim 14, characterized in that:
obtaining the dress ornament model of the selected dress ornament comprises obtaining a three-dimensional dress ornament model of the selected dress ornament;
obtaining the match information between the dress ornament key points of the selected dress ornament and the human body key points comprises obtaining the match information between the three-dimensional dress ornament key points of the selected dress ornament and the three-dimensional human body key points;
rendering the dress ornament model matched to the current body posture to obtain the dress ornament rendering result comprises rendering the three-dimensional dress ornament model matched to the current body posture to obtain a two-dimensional dress ornament image of the selected dress ornament;
displaying the dress ornament rendering result by superimposing it on the body image seen by the user comprises superimposing the two-dimensional dress ornament image on the body image seen by the user to display the two-dimensional dress ornament image.
17. The virtual dress-up method according to claim 12, characterized in that rendering the dress ornament model matched to the current body posture to obtain the dress ornament rendering result comprises:
obtaining the current illumination condition;
dynamically setting the light source of the virtual world according to the illumination condition when rendering the dress ornament model matched to the current body posture.
18. The virtual dress-up method according to claim 12, characterized in that:
before the step of displaying the dress ornament rendering result by superimposing it on the body image seen by the user, the method further comprises calibrating the position at which the dress ornament rendering result is to be presented, using face information as a calibration marker;
displaying the dress ornament rendering result by superimposing it on the body image seen by the user comprises superimposing the calibrated dress ornament rendering result on the body image seen by the user according to the calibrated presentation position.
19. The virtual dress-up method according to claim 18, characterized in that calibrating the presentation position of the dress ornament rendering result using face information as a calibration marker comprises:
identifying the facial key points of the body image;
displaying a preset standard face image and prompting the user to move so as to align the user's face with the standard face image;
judging, from the identified facial key points, whether the user's face is aligned with the standard face image;
when the user's face is aligned with the standard face image, determining the coordinate mapping relationship between the camera coordinate system and the display coordinate system, so as to calibrate the presentation position of the dress ornament rendering result according to the coordinate mapping relationship.
20. The virtual dress-up method according to any one of claims 12 to 19, characterized in that it further comprises:
at a preset time interval, repeating the steps from acquiring the vision data to displaying the dress ornament rendering result by superimposing it on the body image seen by the user, so as to display the dress-up effect in real time;
recording, in each repetition, one or more of the vision data, the match information between the dress ornament key points of the selected dress ornament and the human body key points, and the dress ornament model matched to the current body posture, so as to generate a historical dress-up record.
21. The virtual dress-up method according to claim 20, characterized in that:
the method further comprises determining a display mode according to whether the system is currently online or offline and/or according to the user's choice, the display mode including one or more of a first display mode, a second display mode and a third display mode;
displaying the dress ornament rendering result by superimposing it on the body image seen by the user comprises one or more of the following steps:
if the display mode is the first display mode, superimposing the dress ornament rendering result on the body image reflected by the mirror when displaying it, so that the user sees the virtually dressed-up self in the mirror;
if the display mode is the second display mode, superimposing the dress ornament rendering result on the body image in the vision data of the historical dress-up record when displaying it, so as to show vision data overlaid with the virtual dress-up effect;
if the display mode is the third display mode, superimposing the dress ornament rendering result on the body image in the vision data collected in real time when displaying it, so as to show vision data overlaid with the virtual dress-up effect.
22. Equipment, comprising:
a memory, for storing non-transitory computer-readable instructions; and
a processor, for running the computer-readable instructions, such that when the computer-readable instructions are executed by the processor the virtual dress-up method according to any one of claims 12 to 21 is implemented.
23. A computer-readable storage medium for storing a computer program, characterized in that when the program is executed by a computer or a processor it implements the steps of the virtual dress-up method according to any one of claims 12 to 21.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910640937.8A CN110363867B (en) | 2019-07-16 | 2019-07-16 | Virtual decorating system, method, device and medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910640937.8A CN110363867B (en) | 2019-07-16 | 2019-07-16 | Virtual decorating system, method, device and medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110363867A true CN110363867A (en) | 2019-10-22 |
CN110363867B CN110363867B (en) | 2022-11-29 |
Family
ID=68219602
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910640937.8A Active CN110363867B (en) | 2019-07-16 | 2019-07-16 | Virtual decorating system, method, device and medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110363867B (en) |
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2006331131A (en) * | 2005-05-26 | 2006-12-07 | Matsushita Electric Works Ltd | Dressing system |
US20120127199A1 (en) * | 2010-11-24 | 2012-05-24 | Parham Aarabi | Method and system for simulating superimposition of a non-linearly stretchable object upon a base object using representative images |
CN108510594A (en) * | 2018-02-27 | 2018-09-07 | 吉林省行氏动漫科技有限公司 | Virtual fit method, device and terminal device |
CN108681956A (en) * | 2018-07-17 | 2018-10-19 | 深圳市艾贝比品牌管理咨询有限公司 | Dress ornament screening technique, terminal and storage medium based on virtual reality |
Cited By (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2021258971A1 (en) * | 2020-06-24 | 2021-12-30 | 北京字节跳动网络技术有限公司 | Virtual clothing changing method and apparatus, and device and medium |
US11195341B1 (en) | 2020-06-29 | 2021-12-07 | Snap Inc. | Augmented reality eyewear with 3D costumes |
WO2022005715A1 (en) * | 2020-06-29 | 2022-01-06 | Snap Inc. | Augmented reality eyewear with 3d costumes |
CN112070573A (en) * | 2020-07-30 | 2020-12-11 | 象其形(浙江)智能科技有限公司 | AR technology-based shoe purchasing method and shoe purchasing system |
CN114299264A (en) * | 2020-09-23 | 2022-04-08 | 秀铺菲公司 | System and method for generating augmented reality content based on warped three-dimensional model |
CN112232183A (en) * | 2020-10-14 | 2021-01-15 | 北京字节跳动网络技术有限公司 | Virtual wearing object matching method and device, electronic equipment and computer readable medium |
CN112232183B (en) * | 2020-10-14 | 2023-04-28 | 抖音视界有限公司 | Virtual wearing object matching method, device, electronic equipment and computer readable medium |
CN113066125A (en) * | 2021-02-27 | 2021-07-02 | 华为技术有限公司 | Augmented reality method and related equipment thereof |
WO2022179603A1 (en) * | 2021-02-27 | 2022-09-01 | 华为技术有限公司 | Augmented reality method and related device thereof |
CN113140046A (en) * | 2021-04-21 | 2021-07-20 | 上海电机学院 | AR (augmented reality) cross-over control method and system based on three-dimensional reconstruction and computer readable medium |
CN113129450A (en) * | 2021-04-21 | 2021-07-16 | 北京百度网讯科技有限公司 | Virtual fitting method, device, electronic equipment and medium |
CN113129450B (en) * | 2021-04-21 | 2024-04-05 | 北京百度网讯科技有限公司 | Virtual fitting method, device, electronic equipment and medium |
CN113269072A (en) * | 2021-05-18 | 2021-08-17 | 咪咕文化科技有限公司 | Picture processing method, device, equipment and computer program |
CN113269072B (en) * | 2021-05-18 | 2024-06-07 | 咪咕文化科技有限公司 | Picture processing method, device, equipment and computer program |
CN114565505A (en) * | 2022-01-17 | 2022-05-31 | 北京新氧科技有限公司 | Garment deformation method, device, equipment and storage medium based on virtual reloading |
CN114723517A (en) * | 2022-03-18 | 2022-07-08 | 唯品会(广州)软件有限公司 | Virtual fitting method, device and storage medium |
CN114445271A (en) * | 2022-04-01 | 2022-05-06 | 杭州华鲤智能科技有限公司 | Method for generating virtual fitting 3D image |
CN114445271B (en) * | 2022-04-01 | 2022-06-28 | 杭州华鲤智能科技有限公司 | Method for generating virtual fitting 3D image |
WO2024169854A1 (en) * | 2023-02-17 | 2024-08-22 | 北京字跳网络技术有限公司 | Image rendering method and apparatus, and electronic device and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN110363867B (en) | 2022-11-29 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110363867A (en) | Virtual dress up system, method, equipment and medium | |
US20210177124A1 (en) | Information processing apparatus, information processing method, and computer-readable storage medium | |
US20210365492A1 (en) | Method and apparatus for identifying input features for later recognition | |
Memo et al. | Head-mounted gesture controlled interface for human-computer interaction | |
CN108875633B (en) | Expression detection and expression driving method, device and system and storage medium | |
JP3984191B2 (en) | Virtual makeup apparatus and method | |
CN105391970B (en) | The method and system of at least one image captured by the scene camera of vehicle is provided | |
CN105404392B (en) | Virtual method of wearing and system based on monocular cam | |
US6552729B1 (en) | Automatic generation of animation of synthetic characters | |
CN109377557B (en) | Real-time three-dimensional face reconstruction method based on single-frame face image | |
CN108140105A (en) | Head-mounted display with countenance detectability | |
CN107688391A (en) | A kind of gesture identification method and device based on monocular vision | |
JP2007213623A (en) | Virtual makeup device and method therefor | |
CN107871098B (en) | Method and device for acquiring human face characteristic points | |
CN108460398B (en) | Image processing method and device and cloud processing equipment | |
CN108932654B (en) | Virtual makeup trial guidance method and device | |
CN104364733A (en) | Position-of-interest detection device, position-of-interest detection method, and position-of-interest detection program | |
CN108629248A (en) | A kind of method and apparatus for realizing augmented reality | |
US20220044311A1 (en) | Method for enhancing a user's image while e-commerce shopping for the purpose of enhancing the item that is for sale | |
CN110717391A (en) | Height measuring method, system, device and medium based on video image | |
CN108537126A (en) | A kind of face image processing system and method | |
JP2020177620A (en) | Method of generating 3d facial model for avatar and related device | |
CN110866139A (en) | Cosmetic treatment method, device and equipment | |
CN114333046A (en) | Dance action scoring method, device, equipment and storage medium | |
CN106570747A (en) | Glasses online adaption method and system combining hand gesture recognition |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||