CN107864336A - Image processing method and mobile terminal - Google Patents
Image processing method and mobile terminal Download PDF Info
- Publication number
- CN107864336A (application CN201711194076.2A)
- Authority
- CN
- China
- Prior art keywords
- face region
- subject face region
- non-subject face region
- blurring
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/61—Control of cameras or camera modules based on recognised objects
- H04N23/611—Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/04—Context-preserving transformations, e.g. by using an importance map
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/80—Camera processing pipelines; Components thereof
Abstract
Embodiments of the present invention disclose an image processing method and a mobile terminal. The image processing method includes: obtaining the face sizes of N face regions in an image captured by a camera; determining, from the N face regions, a subject face region and non-subject face regions; and performing blurring of different degrees on the non-subject face regions according to the face size of the subject face region and the face sizes of the non-subject face regions. Blurring the non-subject face regions in the image data with different blurring degrees effectively improves the blur differentiation among multiple human subjects when shooting with a single camera.
Description
Technical field
Embodiments of the present invention relate to the field of communication technology, and in particular to an image processing method and a mobile terminal.
Background technology
Photography has become indispensable in daily life. In particular, with the development of intelligent terminals, photographing applications have become ever more widespread since cameras were integrated into such terminals. At the same time, whether for personal or commercial use, users demand ever higher photo quality and user experience. However, shooting scenes are often complex and changeable. To adapt a photo to such scenes and highlight the subject so as to convey a sense of depth, a common processing method is to keep the subject sharp while blurring the regions outside the subject. Blurring softens the regions beyond the subject so that the subject stands out.
In the prior art, blurring of image data generally falls into dual-camera blurring and single-camera blurring. Dual-camera blurring refers to using depth-of-field information from an auxiliary camera to distinguish the subject from the background and then blurring the background region. Single-camera blurring refers to distinguishing the subject from the background in the image data, and blurring the background, without depth-of-field information from an auxiliary camera.
However, lacking depth-of-field information, the traditional single-camera blurring approach can only crudely separate subject from background and blur the background alone. As a result, when the subjects of the image data are people, although the human subjects can be highlighted, the blurring degree around each subject is uniform. That is, when there is more than one human subject, even if the subjects are actually far apart, the blurring degree remains the same, the blur differentiation among the subjects is low, and the sense of depth is lost.
Summary of the invention
Embodiments of the present invention provide an image processing method and a mobile terminal, to solve the problem of low blur differentiation among multiple human subjects when shooting with a single camera.
To solve the above technical problem, the present invention is implemented as follows:
In a first aspect, an image processing method applied to a mobile terminal is provided. The method includes:
obtaining the face sizes of N face regions in an image captured by a camera;
determining, from the N face regions, a subject face region and non-subject face regions; and
performing blurring of different degrees on the non-subject face regions according to the face size of the subject face region and the face sizes of the non-subject face regions;
where N is an integer greater than 1.
In a second aspect, an embodiment of the present invention further provides a mobile terminal, including:
a size obtaining module, configured to obtain the face sizes of N face regions in an image captured by a camera;
a region determining module, configured to determine, from the N face regions, a subject face region and non-subject face regions; and
a blurring module, configured to perform blurring of different degrees on the non-subject face regions according to the face size of the subject face region and the face sizes of the non-subject face regions; where N is an integer greater than 1.
In a third aspect, an embodiment of the present invention further provides a mobile terminal, including a processor, a memory, and a computer program stored on the memory and executable on the processor, where the computer program, when executed by the processor, implements the steps of the above image processing method.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium storing a computer program, where the computer program, when executed by a processor, implements the steps of the above image processing method.
In the embodiments of the present invention, the face sizes of N face regions in an image captured by a camera are obtained; a subject face region and non-subject face regions are determined from the N face regions; and blurring of different degrees is performed on the non-subject face regions according to the face size of the subject face region and the face sizes of the non-subject face regions. The non-subject face regions in the image data are thus blurred with different blurring degrees, which effectively improves the blur differentiation among multiple human subjects when shooting with a single camera and gives the image data a stronger sense of depth.
Brief description of the drawings
To describe the technical solutions in the embodiments of the present invention more clearly, the following briefly introduces the accompanying drawings required for describing the embodiments. Apparently, the accompanying drawings in the following description show merely some embodiments of the present invention, and a person of ordinary skill in the art may derive other drawings from these accompanying drawings without creative effort.
Fig. 1 is a flowchart of an image processing method according to an embodiment of the present invention;
Fig. 2 is a flowchart of another image processing method according to an embodiment of the present invention;
Fig. 3 is a block diagram of a mobile terminal according to an embodiment of the present invention;
Fig. 4 is a block diagram of another mobile terminal according to an embodiment of the present invention;
Fig. 5 is a schematic diagram of the hardware structure of a mobile terminal according to an embodiment of the present invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings in the embodiments of the present invention. Apparently, the described embodiments are some rather than all of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
Referring to Fig. 1, a flowchart of an image processing method according to an embodiment of the present invention is shown. The method provided in this embodiment may be performed by a mobile terminal, and the image processing method includes:
Step 101: obtain the face sizes of N face regions in an image captured by a camera.
Specifically, suppose N persons are present in a scene shot with a single camera, where N is an integer greater than 1. The prior art cannot distinguish the near-far relationships among the persons. To address this defect, the embodiment of the present invention may first perform face recognition on the captured image data to obtain the N face regions in the image captured by the camera, and then obtain the face size of each of the N face regions.
Specifically, the face size of a face region may be characterized by the area of the face region, or by the width of the face region. According to the principle of perspective, objects imaged on a plane obey the rule that nearer objects appear larger and farther objects appear smaller. Therefore, the front-rear positional relationship of the face regions in the scene can be judged from the width and/or area of each face region.
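As an illustrative sketch (not part of the patent text), the width-based and area-based size metrics described above might be computed from face-detector output as follows. The `FaceRegion` type and the bounding-box output format are assumptions for illustration, not the patent's implementation:

```python
from dataclasses import dataclass


@dataclass
class FaceRegion:
    # Bounding box of a detected face, in pixels (assumed detector output).
    x: int
    y: int
    w: int
    h: int

    def width(self) -> int:
        # Width characterizes face size; by perspective, nearer faces are wider.
        return self.w

    def area(self) -> int:
        # Alternative size metric mentioned in the text.
        return self.w * self.h


# Two hypothetical detected faces in one captured frame.
faces = [FaceRegion(40, 60, 120, 130), FaceRegion(300, 80, 80, 90)]
sizes = [f.width() for f in faces]  # [120, 80]
```

Either metric orders the faces the same way here; the text later argues width is the more robust choice.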
In practical applications, the image data includes at least one of preview frame data and photographing frame data. Preview frame data refers to image data obtained in preview mode, and photographing frame data refers to image data obtained in photographing mode. That is, the image data may be obtained while previewing the scene or after the shot of the scene is completed; the manner of obtaining the image data is not limited.
Step 102: determine, from the N face regions, a subject face region and non-subject face regions.
The N face regions may include a subject face region and non-subject face regions, where the subject face region is the region of the face that needs to be emphasized in the image data, and the non-subject face regions are the comparatively secondary face regions in the image data. Distinguishing the primary-secondary relationship among the face regions gives the face regions in the image data a stronger sense of depth.
Specifically, since the person closest to the camera is usually the one to be emphasized, the face region corresponding to that person is the largest in the captured image data. Therefore, when determining the subject face region from the N face regions, the face region with the largest face size may be determined as the subject face region, and each of the remaining face regions may be determined as a non-subject face region, so that the subject and non-subject face regions are determined automatically. Alternatively, a first input from the user on the image may be received, the face region corresponding to the first input may be determined as the subject face region, and each of the other face regions among the N face regions may be determined as a non-subject face region. The first input is a preset operation by which the user selects the subject face region, and the preset operation includes at least one of a touch operation on the picture corresponding to a face region and a selection operation on a label of a face region, so that the determined result better meets the user's needs. For example, after the mobile terminal recognizes the N face regions, it may determine which face region the user wants as the subject face region by receiving the user's touch operation on the picture corresponding to a face region. After recognizing the N face regions, the mobile terminal may also number them and present the number of each face region to the user, so that the subject face region is determined by receiving the user's selection of a face region label. The selection operation may be performed by touch, by voice, or the like; for example, the user may select a numbered face region by speaking its label.
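The automatic selection rule described above (the largest face becomes the subject, all others become non-subject) can be sketched as follows. The list-of-widths representation is an illustrative assumption:

```python
from typing import List, Tuple


def split_subject(widths: List[int]) -> Tuple[int, List[int]]:
    """Return (index of subject face, indices of non-subject faces).

    The subject is the face region with the largest size (here, width),
    matching the automatic rule in step 102; all other regions are
    non-subject face regions.
    """
    if len(widths) < 2:
        raise ValueError("N must be an integer greater than 1")
    subject = max(range(len(widths)), key=lambda i: widths[i])
    non_subject = [i for i in range(len(widths)) if i != subject]
    return subject, non_subject


subject, others = split_subject([120, 80, 150])
# subject == 2 (width 150); others == [0, 1]
```

The user-driven alternative in the text would simply replace the `max(...)` line with the index the user touched or spoke.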
Step 103: perform blurring of different degrees on the non-subject face regions according to the face size of the subject face region and the face sizes of the non-subject face regions.
Specifically, after the subject face region and the non-subject face regions are determined from the N face regions, the face size of the subject face region and the face size of each non-subject face region can be determined, and from them the absolute value of the face size difference between each non-subject face region and the subject face region. According to those absolute values, blurring of different degrees is performed on the non-subject face regions. By the principle of perspective, the larger the absolute value of the face size difference between a non-subject face region and the subject face region, the farther that face is from the subject face region, and the deeper the blurring applied to it may be, effectively improving the blur differentiation among multiple human subjects when shooting with a single camera.
In practical applications, the background region in the image may first be obtained and its blurring degree determined. The blurring degree of each non-subject face region is then determined according to the absolute value of its face size difference and the blurring degree of the background region, and each non-subject face region is blurred according to its own blurring degree. Here the background region is the whole image region other than the face regions, and the blurring degree of each non-subject face region is positively correlated with the absolute value of the corresponding face size difference.
To sum up, in the embodiments of the present invention, the face sizes of N face regions in an image captured by a camera are obtained; a subject face region and non-subject face regions are determined from the N face regions; and blurring of different degrees is performed on the non-subject face regions according to the face size of the subject face region and the face sizes of the non-subject face regions. The non-subject face regions in the image data are thus blurred with different blurring degrees, which effectively improves the blur differentiation among multiple human subjects when shooting with a single camera and gives the image data a stronger sense of depth.
Referring to Fig. 2, a flowchart of another image processing method according to an embodiment of the present invention is shown. The method provided in this embodiment may be performed by a mobile terminal, and the image processing method includes:
Step 201: obtain the contour coordinates of N face regions in the image data.
After image data is captured with a single camera, the face regions in the image data can be determined by performing face recognition on the image. When N face regions are recognized in the image data, the image data captured by the single camera lacks depth-of-field information, that is, the front-rear positional relationship of the subjects cannot be obtained directly, and it is therefore difficult to distinguish the front-rear positional relationship of the N face regions by depth of field. In this case, the front-rear positional relationship of the N face regions can be analyzed by obtaining their contour coordinates. That relationship can then be used to avoid blurring all face regions with the same blurring degree, enhancing the sense of depth of each face region.
The obtained image data may be image data obtained in preview mode while the mobile terminal is taking a picture, image data obtained in photographing mode, or other image data; the manner of obtaining the image data is not limited. That is, even in the absence of depth-of-field information, the embodiment of the present invention can distinguish the front-rear positional relationship, and the relative distances, of the face regions in the image data from their contour coordinates in the image plane. Image data obtained in any of the above ways can therefore serve as the object of the method provided by the embodiment of the present invention, which gives the method a wide range of application.
Step 202: determine the face sizes of the N face regions according to the contour coordinates.
After the contour coordinates of each face region in the image data are obtained, the shape of the face region can be determined, and the face sizes of the N face regions can then be determined from features of that shape. For example, a face region obtained by face recognition may be a rectangle, and features such as the area and the width of the rectangle can both be used to characterize the size of the face region. In practical applications, factors such as a person's hairstyle, headwear, or hat may interfere with the accuracy of judging the height of a face region; it is therefore preferable to use the width of a face region as the feature for measuring its size.
Specifically, a contour coordinate may be a coordinate in a rectangular coordinate system. Two contour coordinates that share the same ordinate and whose abscissa difference has the largest absolute value can be used to measure the width of the face region. If the width of a face region is used to characterize its size, the absolute value of the difference between the abscissas of those two contour coordinates serves as the size of the face region.
Step 203: determine the subject face region and the non-subject face regions from the N face regions.
Specifically, the subject face region may be determined first from the N face regions, to highlight the point of emphasis of the image data; the non-subject face regions are then determined from it, and the subject face region serves as the reference for blurring each non-subject face region.
When determining the subject face region, it may be selected from the face regions by a preset selection rule. For example, when photographing several people, the person nearer the camera usually needs to be emphasized, and according to the principle of perspective the person at the front usually has the largest corresponding face region in the image data. Therefore, the largest of the N face regions may be chosen as the subject face region, and the other face regions in the image data used as non-subject face regions. As another implementation, a first input from the user on the image may be received, and the face region corresponding to that first input determined as the subject face region. For example, each recognized face region may be shown to the user on the display interface; after the user's selection is received, the selected face region serves as the subject face region and the other face regions in the image data as non-subject face regions, fully meeting the user's need for independent selection and enhancing operability and practicality. In practical applications, the user may select one of the N face regions as the subject face region by touch, by voice, or the like. As another example, the N face regions may be labeled in order from left to right and the labels shown to the user; the user may select a face region as the subject face region by selecting its label, and each of the other face regions among the N face regions is determined as a non-subject face region.
Step 204: determine the absolute value of the face size difference between each non-subject face region and the subject face region.
After the subject face region and the non-subject face regions among the N face regions are determined, the face sizes of the subject face region and of each non-subject face region can be determined, and with them the absolute value of the face size difference between each non-subject face region and the subject face region.
Specifically, the face size difference between each non-subject face region and the subject face region may be characterized by the ratio of the width of the non-subject face region to the width of the subject face region. For example, the ratio a of each non-subject face region's width to the subject face region's width may first be calculated. The absolute value of the difference between a and 1, |1 - a|, measures how far the non-subject face region is from the subject face region, and can then be used to determine the blurring degree for that non-subject face region. For example, if the width ratio of non-subject face region A to the subject face region is 0.8, the absolute value of the face size difference between A and the subject face region is 0.2; if the width ratio of non-subject face region B to the subject face region is 1.3, the absolute value of the face size difference between B and the subject face region is 0.3.
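The ratio measure of step 204 reduces to a one-line formula; the sketch below reproduces the worked examples for regions A and B from the text (the rounding is only to suppress floating-point noise, an implementation detail of this illustration):

```python
def size_difference(width: int, subject_width: int) -> float:
    """|1 - a|, where a is the ratio of this face's width to the subject's.

    Per step 204, this absolute value characterizes the face size
    difference between a non-subject face region and the subject face
    region, whether the non-subject face is smaller (a < 1) or larger
    (a > 1) than the subject face.
    """
    a = width / subject_width
    return round(abs(1 - a), 6)  # rounded to suppress float noise


subject_width = 100
print(size_difference(80, subject_width))   # 0.2  (region A in the text)
print(size_difference(130, subject_width))  # 0.3  (region B in the text)
```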
Step 205: perform blurring of different degrees on the non-subject face regions according to the absolute values of the face size differences.
Specifically, when blurring image data, the background region in the image data is usually blurred; and in practical applications, because image data shot with a single camera lacks depth-of-field information, a uniform blurring degree is generally applied to the background region.
After the blurring degree for the background region is determined, the blurring degree of each non-subject face region can be determined according to the absolute value of its face size difference and the blurring degree of the background region, and each non-subject face region is then blurred according to its own blurring degree. Here the background region is the whole image region other than the face regions, and the blurring degree of each non-subject face region is positively correlated with the absolute value of the corresponding face size difference.
For example, if the blurring degree of the background region is X and the absolute value of the face size difference between a non-subject face region and the subject face region is m, the blurring degree of that non-subject face region may be mX, and the region is blurred accordingly. When the image data contains more than two non-subject face regions and the absolute values of their face size differences differ, the non-subject face regions are blurred with different blurring degrees, reflecting the different distances of the persons in the scene corresponding to the image data. The blurring degree of a non-subject face region is positively correlated with the absolute value of its face size difference; in practical applications, the positive correlation is not limited to the above proportional approach.
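Combining steps 204 and 205, a hedged end-to-end sketch of the proportional rule mX from the example above; the widths and the background degree X = 10 are illustrative values, and the rounding is an implementation detail of this sketch:

```python
from typing import List


def blur_degrees(widths: List[int], subject_index: int,
                 background_degree: float) -> List[float]:
    """Blurring degree m * X for each face region (step 205).

    m is |1 - a|, a being the width ratio to the subject face (step 204);
    X is the uniform blurring degree chosen for the background region.
    The subject face region gets degree 0, i.e. it stays sharp.
    """
    subject_width = widths[subject_index]
    degrees = []
    for i, w in enumerate(widths):
        if i == subject_index:
            degrees.append(0.0)
        else:
            m = round(abs(1 - w / subject_width), 6)
            degrees.append(round(m * background_degree, 6))
    return degrees


# Subject width 100, non-subject widths 80 and 130, background degree X = 10.
print(blur_degrees([100, 80, 130], 0, 10))  # [0.0, 2.0, 3.0]
```

The two non-subject faces receive different degrees (2.0 versus 3.0), which is exactly the blur differentiation the method aims for; any other positively correlated mapping of m to a degree would also satisfy the text.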
To sum up, in the embodiments of the present invention, the front-rear positional relationship of the face regions in the image data is distinguished by obtaining the in-plane contour coordinates of the N face regions, so that even in the absence of depth-of-field information the method can still blur the face regions in the image data with different blurring degrees. This not only effectively improves the blur differentiation among multiple human subjects when shooting with a single camera, giving the image data a stronger sense of depth, but also yields a wider range of application. Moreover, since the subject face region and the non-subject face regions can be determined according to the user's first input on the image, the user's need for independent selection can be fully met, enhancing operability and practicality.
Referring to Fig. 3, a block diagram of a mobile terminal according to an embodiment of the present invention is shown. The mobile terminal includes: a size obtaining module 31, a region determining module 32, and a blurring module 33.
The size obtaining module 31 is configured to obtain the face sizes of N face regions in an image captured by a camera. The region determining module 32 is configured to determine, from the N face regions, a subject face region and non-subject face regions. The blurring module 33 is configured to perform blurring of different degrees on the non-subject face regions according to the face size of the subject face region and the face sizes of the non-subject face regions; where N is an integer greater than 1.
Referring to Fig. 4, in a preferred embodiment of the present invention, on the basis of Fig. 3, the blurring module 33 includes a difference determining submodule 331 and a blurring submodule 332, and the region determining module 32 includes an automatic determining submodule 321 and an input determining submodule 322.
The difference determination submodule 331 is configured to determine the absolute value of the face size difference between each non-subject face region and the subject face region. The blurring processing submodule 332 is configured to perform blurring processing of different degrees on the non-subject face regions according to the absolute values of the face size differences, where the face size of a face region includes the width value and/or the area value of the face region.
The automatic determination submodule 321 is configured to determine the face region with the largest face size among the N face regions as the subject face region, and to determine each of the remaining face regions as a non-subject face region.
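The automatic determination submodule 321 can be sketched as follows (Python; the region format and the helper name are assumptions, since the patent only requires picking the face region with the largest face size as the subject):

```python
# Illustrative sketch of automatic subject selection: the largest face
# region (here, by bounding-box area) becomes the subject; all others
# are non-subject regions. Region format (x, y, w, h) is an assumption.

def split_subject(regions):
    subject = max(regions, key=lambda r: r[2] * r[3])   # largest area
    non_subjects = [r for r in regions if r is not subject]
    return subject, non_subjects

faces = [(10, 10, 40, 50), (200, 30, 90, 110), (400, 20, 30, 35)]
subject, others = split_subject(faces)
print(subject)       # (200, 30, 90, 110)
print(len(others))   # 2
```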
The input determination submodule 322 is configured to receive a first input made by the user in the image, to determine the face region corresponding to the first input as the subject face region, and to determine each of the face regions other than the face region corresponding to the first input as a non-subject face region.
Further, the blurring processing submodule 332 includes: a background region acquiring unit 3321, a background blur degree determination unit 3322, a face blur degree determination unit 3323 and a face blurring processing unit 3324.
The background region acquiring unit 3321 is configured to acquire the background region in the image. The background blur degree determination unit 3322 is configured to determine the degree of blurring of the background region. The face blur degree determination unit 3323 is configured to determine the degree of blurring of each non-subject face region according to the absolute value of the face size difference and the degree of blurring of the background region. The face blurring processing unit 3324 is configured to blur each non-subject face region according to its determined degree of blurring. Here, the background region is the entire image region other than the face regions, and the degree of blurring of each non-subject face region is positively correlated with the absolute value of the corresponding face size difference.
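One possible reading of units 3322 to 3324, sketched in Python. The linear scaling below is an assumption: the patent requires only that each non-subject face's degree of blurring be derived from its face size difference and from the background's degree of blurring, and be positively correlated with the size difference; capping at the background degree is one plausible design choice, not mandated by the text:

```python
# Hedged sketch: assign each non-subject face a blur degree that grows
# with its size difference from the subject and never exceeds the
# background blur degree. The linear mapping is illustrative only.

def blur_degrees(subject_size, non_subject_sizes, background_degree):
    diffs = [abs(subject_size - s) for s in non_subject_sizes]
    max_diff = max(diffs) or 1   # avoid division by zero when all sizes match
    # Scale each difference into [0, background_degree]: a larger size
    # gap means a stronger blur, but never more than the background.
    return [background_degree * d / max_diff for d in diffs]

print(blur_degrees(10000, [8000, 4000], background_degree=1.0))
# the smaller (more distant) face gets the stronger blur
```

The degrees produced here would then parameterize an actual blur filter (for example, the kernel size of a Gaussian blur) applied per region.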
The mobile terminal provided in the embodiment of the present invention can implement each process implemented by the mobile terminal in the method embodiments of Fig. 1 to Fig. 2; to avoid repetition, details are not described here again. In the embodiment of the present invention, the size acquisition module 31 acquires the face sizes of the N face regions in the image captured by the camera, the region determination module 32 determines the subject face region and the non-subject face regions from the N face regions, and the blurring processing module 33 performs blurring processing of different degrees on the non-subject face regions according to the face size of the subject face region and the face sizes of the non-subject face regions. The non-subject face regions in the image data are thereby blurred to different degrees, which effectively improves the blur discrimination of multiple person subjects in single-camera shooting and gives the image data a greater sense of depth.
Fig. 5 is a schematic diagram of a hardware structure of a mobile terminal implementing the embodiments of the present invention.
The mobile terminal 500 includes, but is not limited to: a radio frequency unit 501, a network module 502, an audio output unit 503, an input unit 504, a sensor 505, a display unit 506, a user input unit 507, an interface unit 508, a memory 509, a processor 510, a power supply 511 and other components. Those skilled in the art will understand that the mobile terminal structure shown in Fig. 5 does not constitute a limitation on the mobile terminal; the mobile terminal may include more or fewer components than illustrated, combine certain components, or arrange the components differently. In the embodiments of the present invention, the mobile terminal includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palmtop computer, an in-vehicle terminal, a wearable device, a pedometer and the like.
The radio frequency unit 501 is configured to acquire the face size of each of N face regions in an image captured by the camera, where N is an integer greater than 1.
The processor 510 is configured to determine, from the N face regions, a subject face region and non-subject face regions, and to perform blurring processing of different degrees on the non-subject face regions according to the face size of the subject face region and the face sizes of the non-subject face regions.
In summary, in the embodiment of the present invention, the face sizes of the N face regions in the image captured by the camera are acquired, a subject face region and non-subject face regions are determined from the N face regions, and blurring processing of different degrees is then performed on the non-subject face regions according to the face size of the subject face region and the face sizes of the non-subject face regions. The non-subject face regions in the image data are thereby blurred to different degrees, which effectively improves the blur discrimination of multiple person subjects in single-camera shooting and gives the image data a greater sense of depth.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 501 may be used to receive and send signals during the sending and receiving of information or during a call. Specifically, after receiving downlink data from a base station, it delivers the data to the processor 510 for processing; in addition, it sends uplink data to the base station. Generally, the radio frequency unit 501 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low-noise amplifier, a duplexer and the like. The radio frequency unit 501 may also communicate with a network and other devices through a wireless communication system.
The mobile terminal provides the user with wireless broadband Internet access through the network module 502, for example helping the user to send and receive e-mails, browse web pages and access streaming media.
The audio output unit 503 may convert audio data received by the radio frequency unit 501 or the network module 502, or stored in the memory 509, into an audio signal and output it as sound. Moreover, the audio output unit 503 may also provide audio output related to a specific function performed by the mobile terminal 500 (for example, a call signal reception sound or a message reception sound). The audio output unit 503 includes a loudspeaker, a buzzer, a receiver and the like.
The input unit 504 is configured to receive audio or video signals. The input unit 504 may include a graphics processing unit (GPU) 5041 and a microphone 5042. The graphics processing unit 5041 processes image data of still pictures or video obtained by an image capture apparatus (such as a camera) in video capture mode or image capture mode. The processed image frames may be displayed on the display unit 506, stored in the memory 509 (or another storage medium), or transmitted via the radio frequency unit 501 or the network module 502. The microphone 5042 can receive sound and process it into audio data; in telephone call mode, the processed audio data may be converted into a format that can be sent to a mobile communication base station via the radio frequency unit 501.
The mobile terminal 500 further includes at least one sensor 505, such as a light sensor, a motion sensor and other sensors. Specifically, the light sensor includes an ambient light sensor and a proximity sensor; the ambient light sensor can adjust the brightness of the display panel 5061 according to the brightness of the ambient light, and the proximity sensor can turn off the display panel 5061 and/or the backlight when the mobile terminal 500 is moved to the ear. As one kind of motion sensor, an accelerometer can detect the magnitude of acceleration in all directions (generally three axes) and, when stationary, can detect the magnitude and direction of gravity; it can be used to recognize the posture of the mobile terminal (such as landscape/portrait switching, related games and magnetometer pose calibration) and for vibration-recognition functions (such as a pedometer or tapping). The sensor 505 may further include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor and the like, which will not be described here again.
The display unit 506 is configured to display information input by the user or information provided to the user. The display unit 506 may include a display panel 5061, which may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED) display or the like.
The user input unit 507 may be used to receive input numeric or character information and to generate key signal input related to user settings and function control of the mobile terminal. Specifically, the user input unit 507 includes a touch panel 5071 and other input devices 5072. The touch panel 5071, also called a touch screen, collects the user's touch operations on or near it (for example, operations performed by the user on or near the touch panel 5071 with a finger, a stylus or any other suitable object or accessory). The touch panel 5071 may include a touch detection apparatus and a touch controller. The touch detection apparatus detects the position of the user's touch, detects the signal brought by the touch operation and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection apparatus, converts it into contact coordinates, sends them to the processor 510, and receives and executes commands sent by the processor 510. In addition, the touch panel 5071 may be implemented as a resistive, capacitive, infrared, surface-acoustic-wave or other type. Besides the touch panel 5071, the user input unit 507 may also include other input devices 5072. Specifically, the other input devices 5072 may include, but are not limited to, a physical keyboard, function keys (such as a volume control key and a switch key), a trackball, a mouse and a joystick, which will not be described here again.
Further, the touch panel 5071 may cover the display panel 5061. When the touch panel 5071 detects a touch operation on or near it, it transmits the operation to the processor 510 to determine the type of the touch event, and the processor 510 then provides a corresponding visual output on the display panel 5061 according to the type of the touch event. Although in Fig. 5 the touch panel 5071 and the display panel 5061 are shown as two independent components implementing the input and output functions of the mobile terminal, in some embodiments the touch panel 5071 and the display panel 5061 may be integrated to implement the input and output functions of the mobile terminal, which is not specifically limited here.
The interface unit 508 is an interface through which an external apparatus is connected to the mobile terminal 500. For example, the external apparatus may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting an apparatus having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port and the like. The interface unit 508 may be used to receive input (for example, data information or electric power) from an external apparatus and transmit the received input to one or more elements within the mobile terminal 500, or may be used to transmit data between the mobile terminal 500 and an external apparatus.
The memory 509 may be used to store software programs and various data. The memory 509 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function or an image playing function) and the like, and the data storage area may store data created according to the use of the mobile phone (such as audio data and a phone book) and the like. In addition, the memory 509 may include a high-speed random access memory, and may further include a non-volatile memory, for example at least one magnetic disk storage device, a flash memory device or another solid-state storage device.
The processor 510 is the control center of the mobile terminal. It connects all parts of the whole mobile terminal through various interfaces and lines, and performs the various functions of the mobile terminal and processes data by running or executing software programs and/or modules stored in the memory 509 and invoking data stored in the memory 509, thereby monitoring the mobile terminal as a whole. The processor 510 may include one or more processing units; preferably, the processor 510 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interfaces, application programs and the like, and the modem processor mainly handles wireless communication. It can be understood that the modem processor may also not be integrated into the processor 510.
The mobile terminal 500 may further include a power supply 511 (such as a battery) that supplies power to each component. Preferably, the power supply 511 may be logically connected to the processor 510 through a power management system, so that functions such as charging management, discharging management and power consumption management are implemented through the power management system.
In addition, the mobile terminal 500 includes some functional modules that are not shown, which will not be described here again.
Preferably, an embodiment of the present invention further provides a mobile terminal, including a processor 510, a memory 509, and a computer program stored in the memory 509 and executable on the processor 510. When executed by the processor 510, the computer program implements each process of the above image processing method embodiments and can achieve the same technical effect; to avoid repetition, details are not described here again.
An embodiment of the present invention further provides a computer-readable storage medium storing a computer program. When executed by a processor, the computer program implements each process of the above image processing method embodiments and can achieve the same technical effect; to avoid repetition, details are not described here again. The computer-readable storage medium is, for example, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disc.
It should be noted that, in this document, the terms "comprise", "include" or any other variant thereof are intended to cover a non-exclusive inclusion, so that a process, method, article or apparatus including a series of elements includes not only those elements but also other elements not expressly listed, or further includes elements inherent to such a process, method, article or apparatus. Without further limitation, an element preceded by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article or apparatus that includes the element.
Through the above description of the embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by software plus a necessary general hardware platform, and of course also by hardware, although in many cases the former is the better implementation. Based on such an understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, can be embodied in the form of a software product. The computer software product is stored in a storage medium (such as a ROM/RAM, a magnetic disk or an optical disc) and includes several instructions to cause a terminal (which may be a mobile phone, a computer, a server, an air conditioner, a network device or the like) to perform the methods described in the embodiments of the present invention.
The embodiments of the present invention are described above with reference to the accompanying drawings, but the present invention is not limited to the above specific embodiments. The above specific embodiments are merely illustrative rather than restrictive; inspired by the present invention, those of ordinary skill in the art can also make many other forms without departing from the concept of the present invention and the scope of protection of the claims, all of which fall within the protection of the present invention.
Claims (12)
- 1. An image processing method, applied to a mobile terminal, characterized by comprising: acquiring the face size of each of N face regions in an image captured by a camera; determining, from the N face regions, a subject face region and non-subject face regions; and performing blurring processing of different degrees on the non-subject face regions according to the face size of the subject face region and the face sizes of the non-subject face regions; wherein N is an integer greater than 1.
- 2. The method according to claim 1, characterized in that the step of performing blurring processing of different degrees on the non-subject face regions according to the face size of the subject face region and the face sizes of the non-subject face regions comprises: determining the absolute value of the face size difference between each non-subject face region and the subject face region; and performing blurring processing of different degrees on the non-subject face regions according to the absolute values of the face size differences; wherein the face size of a face region comprises the width value and/or the area value of the face region.
- 3. The method according to claim 2, characterized in that the step of performing blurring processing of different degrees on the non-subject face regions according to the absolute values of the face size differences comprises: acquiring the background region in the image; determining the degree of blurring of the background region; determining the degree of blurring of each non-subject face region according to the absolute value of the face size difference and the degree of blurring of the background region; and blurring each non-subject face region according to its degree of blurring; wherein the background region is the entire image region other than the face regions, and the degree of blurring of each non-subject face region is positively correlated with the absolute value of the corresponding face size difference.
- 4. The method according to claim 1, characterized in that the step of determining, from the N face regions, a subject face region and non-subject face regions comprises: determining the face region with the largest face size among the N face regions as the subject face region; and determining each of the face regions other than the face region with the largest face size as a non-subject face region.
- 5. The method according to claim 1, characterized in that the step of determining, from the N face regions, a subject face region and non-subject face regions comprises: receiving a first input made by the user in the image; determining the face region corresponding to the first input as the subject face region; and determining each of the face regions other than the face region corresponding to the first input as a non-subject face region.
- 6. A mobile terminal, characterized by comprising: a size acquisition module, configured to acquire the face size of each of N face regions in an image captured by a camera; a region determination module, configured to determine, from the N face regions, a subject face region and non-subject face regions; and a blurring processing module, configured to perform blurring processing of different degrees on the non-subject face regions according to the face size of the subject face region and the face sizes of the non-subject face regions; wherein N is an integer greater than 1.
- 7. The mobile terminal according to claim 6, characterized in that the blurring processing module comprises: a difference determination submodule, configured to determine the absolute value of the face size difference between each non-subject face region and the subject face region; and a blurring processing submodule, configured to perform blurring processing of different degrees on the non-subject face regions according to the absolute values of the face size differences; wherein the face size of a face region comprises the width value and/or the area value of the face region.
- 8. The mobile terminal according to claim 7, characterized in that the blurring processing submodule comprises: a background region acquiring unit, configured to acquire the background region in the image; a background blur degree determination unit, configured to determine the degree of blurring of the background region; a face blur degree determination unit, configured to determine the degree of blurring of each non-subject face region according to the absolute value of the face size difference and the degree of blurring of the background region; and a face blurring processing unit, configured to blur each non-subject face region according to its degree of blurring; wherein the background region is the entire image region other than the face regions, and the degree of blurring of each non-subject face region is positively correlated with the absolute value of the corresponding face size difference.
- 9. The mobile terminal according to claim 6, characterized in that the region determination module comprises: an automatic determination submodule, configured to determine the face region with the largest face size among the N face regions as the subject face region, and to determine each of the face regions other than the face region with the largest face size as a non-subject face region.
- 10. The mobile terminal according to claim 6, characterized in that the region determination module comprises: an input determination submodule, configured to receive a first input made by the user in the image, to determine the face region corresponding to the first input as the subject face region, and to determine each of the face regions other than the face region corresponding to the first input as a non-subject face region.
- 11. A mobile terminal, characterized by comprising a processor, a memory, and a computer program stored in the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of the image processing method according to any one of claims 1 to 5.
- 12. A computer-readable storage medium, characterized in that a computer program is stored on the computer-readable storage medium, and the computer program, when executed by a processor, implements the steps of the image processing method according to any one of claims 1 to 5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711194076.2A CN107864336B (en) | 2017-11-24 | 2017-11-24 | A kind of image processing method, mobile terminal |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107864336A true CN107864336A (en) | 2018-03-30 |
CN107864336B CN107864336B (en) | 2019-07-26 |
Family
ID=61703437
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711194076.2A Active CN107864336B (en) | 2017-11-24 | 2017-11-24 | A kind of image processing method, mobile terminal |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107864336B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11893668B2 (en) | 2021-03-31 | 2024-02-06 | Leica Camera Ag | Imaging system and method for generating a final digital image via applying a profile to image information |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2008118348A (en) * | 2006-11-02 | 2008-05-22 | Nikon Corp | Electronic camera and program |
JP2008233470A (en) * | 2007-03-20 | 2008-10-02 | Sanyo Electric Co Ltd | Diaphragm controller and image processor |
JP2009064188A (en) * | 2007-09-05 | 2009-03-26 | Seiko Epson Corp | Image processing apparatus, image processing method, and image processing system |
US20110037877A1 (en) * | 2009-08-13 | 2011-02-17 | Fujifilm Corporation | Image processing method, image processing apparatus, computer readable medium, and imaging apparatus |
CN102932541A (en) * | 2012-10-25 | 2013-02-13 | 广东欧珀移动通信有限公司 | Mobile phone photographing method and system |
CN104751405A (en) * | 2015-03-11 | 2015-07-01 | 百度在线网络技术(北京)有限公司 | Method and device for blurring image |
CN104794462A (en) * | 2015-05-11 | 2015-07-22 | 北京锤子数码科技有限公司 | Figure image processing method and device |
CN104967786A (en) * | 2015-07-10 | 2015-10-07 | 广州三星通信技术研究有限公司 | Image selection method and device |
CN105303514A (en) * | 2014-06-17 | 2016-02-03 | 腾讯科技(深圳)有限公司 | Image processing method and apparatus |
CN106971165A (en) * | 2017-03-29 | 2017-07-21 | 武汉斗鱼网络科技有限公司 | The implementation method and device of a kind of filter |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110198421A (en) * | 2019-06-17 | 2019-09-03 | Oppo广东移动通信有限公司 | Method for processing video frequency and Related product |
CN110198421B (en) * | 2019-06-17 | 2021-08-10 | Oppo广东移动通信有限公司 | Video processing method and related product |
CN112672102A (en) * | 2019-10-15 | 2021-04-16 | 杭州海康威视数字技术股份有限公司 | Video generation method and device |
CN112672102B (en) * | 2019-10-15 | 2023-03-24 | 杭州海康威视数字技术股份有限公司 | Video generation method and device |
CN112351204A (en) * | 2020-10-27 | 2021-02-09 | 歌尔智能科技有限公司 | Photographing method, photographing device, mobile terminal and computer readable storage medium |
CN113014830A (en) * | 2021-03-01 | 2021-06-22 | 鹏城实验室 | Video blurring method, device, equipment and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN107864336B (en) | 2019-07-26 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||