CN105791674B - Electronic equipment and focusing method - Google Patents
Electronic equipment and focusing method
- Publication number
- CN105791674B (application number CN201610082815.8A)
- Authority
- CN
- China
- Prior art keywords
- training
- parameter
- scene
- trained
- focusing
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/67—Focus control based on electronic image sensor signals
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Image Analysis (AREA)
- Studio Devices (AREA)
Abstract
The present invention provides an electronic equipment and a focusing method. The focusing method includes: obtaining a preview image; identifying a photographed scene and an object from the preview image; based on the photographed scene and the object, obtaining at least one of a set of first parameters and a set of second parameters, where a first parameter indicates the likelihood that a specific region in the preview image serves as the focusing area under the photographed scene, and a second parameter indicates the likelihood that the region where the object is located serves as the focusing area under the photographed scene; determining the focusing area in the preview image based on the at least one of the first parameters and the second parameters; and focusing based on the focusing area.
Description
Technical field
The present invention relates to the field of image-capture processing, and more particularly to an electronic equipment and a focusing method.
Background technique
Recently, in order to increase the degree of automation of electronic equipment with a shooting function and to simplify user operation, automatic focusing methods have become increasingly popular. In one automatic focusing method, a face is detected in the image to be captured, and the focusing area is determined as the region where the face is located.
However, such a focusing method considers neither the photographed scene nor the user's usage habits; the focusing area is fixed, which cannot satisfy the user's diverse composition and shooting demands, resulting in a poor user experience.
Summary of the invention
In view of the above, the present invention provides an electronic equipment and a focusing method that can focus, based on the photographed scene, in a manner that matches the user's usage habits, so that focusing is more intelligent and efficient, satisfies the user's personalized demands, and improves the user experience.
According to an embodiment of the present invention, a focusing method is provided, comprising: obtaining a preview image; identifying a photographed scene and an object from the preview image; based on the photographed scene and the object, obtaining at least one of a set of first parameters and a set of second parameters, where a first parameter indicates the likelihood that a specific region in the preview image serves as the focusing area under the photographed scene, and a second parameter indicates the likelihood that the region where the object is located serves as the focusing area under the photographed scene; determining the focusing area in the preview image based on the at least one of the first parameters and the second parameters; and focusing based on the focusing area.
According to another embodiment of the present invention, an electronic equipment is provided, comprising: an acquiring unit that obtains a preview image; a processing unit that identifies a photographed scene and an object from the preview image, obtains, based on the photographed scene and the object, at least one of a set of first parameters and a set of second parameters, where a first parameter indicates the likelihood that a specific region in the preview image serves as the focusing area under the photographed scene, and a second parameter indicates the likelihood that the region where the object is located serves as the focusing area under the photographed scene, and determines the focusing area in the preview image based on the at least one of the first parameters and the second parameters; and a focusing unit that focuses based on the focusing area.
According to another embodiment of the present invention, a focusing apparatus is provided, comprising: a first acquiring unit that obtains a preview image; a recognition unit that identifies a photographed scene and an object from the preview image; a second acquiring unit that obtains, based on the photographed scene and the object, at least one of a set of first parameters and a set of second parameters, where a first parameter indicates the likelihood that a specific region in the preview image serves as the focusing area under the photographed scene, and a second parameter indicates the likelihood that the region where the object is located serves as the focusing area under the photographed scene; a determination unit that determines the focusing area in the preview image based on the at least one of the first parameters and the second parameters; and a focusing unit that focuses based on the focusing area.
According to another embodiment of the present invention, a computer program product is provided, including a computer-readable storage medium on which computer program instructions are stored. When run by a computer, the computer program instructions execute the following steps: obtaining a preview image; identifying a photographed scene and an object from the preview image; based on the photographed scene and the object, obtaining at least one of a set of first parameters and a set of second parameters, where a first parameter indicates the likelihood that a specific region in the preview image serves as the focusing area under the photographed scene, and a second parameter indicates the likelihood that the region where the object is located serves as the focusing area under the photographed scene; determining the focusing area in the preview image based on the at least one of the first parameters and the second parameters; and focusing based on the focusing area.
In the electronic equipment and focusing method of the embodiments of the present invention, a photographed scene and an object are identified from a preview image, and the focusing area is determined based on the photographed scene and the object, so that focusing can be performed in a manner that matches the user's usage habits under different photographed scenes. Focusing thus becomes more intelligent and efficient, satisfies the user's personalized demands, and improves the user experience.
Detailed description of the invention
Fig. 1 is a flowchart schematically illustrating the main steps of the focusing method according to an embodiment of the present invention;
Fig. 2 is a diagram schematically showing an example representation of the set of first parameters and the set of second parameters in the focusing method of the embodiment of the present invention;
Fig. 3 is a block diagram schematically showing the main configuration of the electronic equipment of the embodiment of the present invention; and
Fig. 4 is a block diagram schematically showing the main configuration of the focusing apparatus of the embodiment of the present invention.
Specific embodiment
Embodiments of the present invention are described in detail below with reference to the accompanying drawings.
First, the focusing method according to an embodiment of the present invention is described. The focusing method of the embodiment of the present invention applies to electronic equipment with a camera unit, such as a camera, a mobile phone, or a tablet computer.
Hereinafter, the focusing method of the embodiment of the present invention is described in detail with reference to Fig. 1.
As shown in Fig. 1, first, a preview image is obtained in step S110. Specifically, for example, when shooting with the electronic equipment, an image data frame to be captured can be acquired by the camera unit as the preview image.
Next, in step S120, a photographed scene and an object are identified from the preview image.
Specifically, whether the preview image corresponds to a preset photographed scene, and whether a preset object exists in the photographed scene, can be identified by various image-recognition methods known in the art or developed in the future. Illustratively, the photographed scene may include at least one of landscape, portrait, night scene, and so on. Illustratively, the object may include at least one of a face, a vehicle, and so on.
After the photographed scene and the object have been identified in the preview image, in step S130, at least one of the set of first parameters and the set of second parameters is obtained based on the photographed scene and the object.
Specifically, the set of first parameters includes one or more first parameters, and the set of second parameters includes one or more second parameters. A first parameter indicates the likelihood that a specific region in the preview image serves as the focusing area under the photographed scene. The specific region can be set as appropriate by those skilled in the art; for example, it can be at least one of the upper-left region, lower-left region, upper-right region, lower-right region, middle region, and so on of the image. A second parameter indicates the likelihood that the region where the object is located serves as the focusing area under the photographed scene.
In one embodiment, the set of first parameters and the set of second parameters can be obtained in advance. In a first embodiment, at least one of the set of first parameters and the set of second parameters can be set manually by the user. In a second embodiment, at least one of the set of first parameters and the set of second parameters can be obtained by learning from user preferences.
Specifically, regarding the set of first parameters, first, multiple training images and multiple training focusing areas, each corresponding to one of the training images, can be obtained.
In one embodiment, the training images can be images previously captured by the user. In another embodiment, the training images can also be images obtained in other ways, such as image statistics obtained through the camera's ISP (image signal processor), or images acquired from other storage media or from a network.
A training focusing area is the region that was focused on in a training image. In one embodiment, the training focusing area can be analyzed automatically from the training image; for example, when the training image is an image previously captured by the user, the training focusing area can be extracted by automatic image analysis of the captured image. In another embodiment, the training focusing area can be labeled manually by the user; for example, when the training image is acquired from another storage medium or from a network, the user can be instructed to mark the desired focusing area on the acquired training image, thereby obtaining the training focusing area.
In addition, the multiple training focusing areas include a training focusing area corresponding to the specific region. Specifically, for example, the specific region can be the upper-left region of the preview image; in this case, the multiple training focusing areas include the upper-left region of a training image. The multiple training focusing areas may also include, for example, at least one of the lower-left region, middle region, upper-right region, lower-right region, and so on.
Next, for each training image, the training scene corresponding to the training image is identified from a training scene set. The training scene set may include multiple preset scenes, for example, landscape, portrait, night scene, food, and so on. Similarly to the above, the training scene corresponding to each training image can be identified by various image-recognition methods known in the art or developed in the future.
In addition, the training scene set includes a training scene corresponding to the photographed scene. Specifically, for example, the photographed scene corresponding to the preview image can be landscape; in this case, the training scene set includes landscape. The training scene set may also include at least one of, for example, portrait, night scene, food, and so on.
After the training scene and the corresponding training focusing area of each training image have been identified as described above, the set of first parameters, which indicates the association between each training scene and each training focusing area, can be calculated based on the training scenes and the training focusing areas.
More specifically, for example, the number of times each training focusing area occurs under each training scene can be counted first. For example, under the training scene "landscape", the training focusing area is the upper-left region s1 times, the lower-left region s2 times, the middle region s3 times, the upper-right region s4 times, and the lower-right region s5 times; and so on.
Next, under each training scene, for each training focusing area, the probability associating the training scene with the training focusing area is determined based on the ratio between the number of times the training focusing area occurs and the total number of occurrences of all training focusing areas, thereby forming the set of first parameters.
Of course, the foregoing is merely an example. On this basis, those skilled in the art can calculate the set of first parameters indicating the association between each training scene and each training focusing area in various other ways.
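As an illustration of the counting procedure described above, the following is a minimal sketch (not part of the patent; all names and scene labels are hypothetical) that estimates the first parameters as conditional frequencies from (training scene, training focusing area) pairs. The same procedure, with training objects in place of focusing areas, yields the set of second parameters described next.

```python
from collections import Counter, defaultdict

def estimate_first_parameters(samples):
    """samples: iterable of (scene, focus_region) pairs taken from the
    training images. Returns {scene: {region: probability}}, where each
    probability is the region's count divided by the total count of
    regions observed under that scene."""
    counts = defaultdict(Counter)
    for scene, region in samples:
        counts[scene][region] += 1
    return {
        scene: {region: n / sum(regions.values())
                for region, n in regions.items()}
        for scene, regions in counts.items()
    }

# Example: under "landscape" the user focused on the middle region
# 3 times out of 4, so that region's probability is 0.75.
params = estimate_first_parameters([
    ("landscape", "middle"), ("landscape", "middle"),
    ("landscape", "middle"), ("landscape", "upper_left"),
    ("portrait", "upper_left"),
])
```

Because only counts are stored along the way, the same table can later be refined sample by sample, which is what the real-time update embodiment below relies on.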
On the other hand, regarding the set of second parameters, first, multiple training images and multiple training focusing areas, each corresponding to a training object, can be obtained by processing similar to the above.
Next, for each training image, the training scene corresponding to the training image can be identified from the training scene set by processing similar to the above. Likewise, the training scene set includes a training scene corresponding to the photographed scene.
In addition, for each training image, the training object located at the training focusing area corresponding to the training image can be identified from a training object set. Specifically, the training object at the training focusing area, for example a face or a vehicle, can be identified by various image-recognition algorithms known in the art or developed in the future. Likewise, the training object set includes a training object corresponding to the object. Specifically, for example, the preview image may include a face object; in this case, the training object set includes face. The training object set may also include at least one of, for example, vehicle, pet, and so on.
After the training scene and the training object located at the corresponding training focusing area have been identified for each training image as described above, the set of second parameters, which indicates the association between each training scene and each training object, can be calculated based on the training scenes and the training objects.
More specifically, for example, the number of times each training object occurs under each training scene can be counted first. For example, under the training scene "landscape", the training object is a face v1 times, a vehicle v2 times, and a pet v3 times; and so on.
Next, under each training scene, for each training object, the probability associating the training scene with the training object is determined based on the ratio between the number of times the training object occurs and the total number of occurrences of all training objects, thereby forming the set of second parameters.
Of course, the foregoing is merely an example. On this basis, those skilled in the art can calculate the set of second parameters indicating the association between each training scene and each training object in various other ways.
Fig. 2 is a diagram schematically showing an example of the set of first parameters and the set of second parameters. As shown in Fig. 2, the probabilities of the different focusing objects and focusing areas under the different scenes are reflected in the form of a probability graph. Specifically, in scene n, the probability that the user selects focusing subject t and the probability that the user selects focusing area m are given by the corresponding entries of the probability graph.
The foregoing describes illustrative ways of obtaining the first parameters and the second parameters.
Thereafter, the focusing method proceeds to step S140. In step S140, the focusing area in the preview image is determined based on the at least one of the first parameters and the second parameters.
Specifically, the focusing area can be determined by a preset algorithm based on the at least one of the first parameters and the second parameters.
In a first embodiment, when only the first parameters or only the second parameters are used, the region corresponding to the maximum value in the set of first parameters or the set of second parameters under the photographed scene can be determined as the focusing area.
In a second embodiment, when both the first parameters and the second parameters are used, the probability that the user takes each different region in the preview image as the focusing area can be calculated by the expression p_o × p_u or the expression p_o + p_u, where p_o is the probability of the user's preference for different objects and p_u is the probability of the user's preference for different regions. Then, the region corresponding to the maximum of the calculated probabilities can be determined as the focusing area.
Of course, the expressions described above are merely illustrative. On this basis, those skilled in the art can calculate the probability that the user takes each different region in the preview image as the focusing area by various other expressions.
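As a concrete illustration of step S140, the following sketch (hypothetical names; one possible reading of the patent's p_o × p_u expression, not a definitive implementation) combines the two parameter sets with the product expression and picks the region with the maximum score:

```python
def choose_focusing_area(scene, first_params, second_params, region_objects):
    """first_params[scene][region] = p_u, the user's preference for a region;
    second_params[scene][obj]     = p_o, the user's preference for an object;
    region_objects[region]        = object detected in that region, if any.
    Scores each region by p_o * p_u (regions without a detected object fall
    back to p_u alone) and returns the highest-scoring region."""
    p_u = first_params[scene]
    p_o = second_params[scene]
    scores = {}
    for region, prob_region in p_u.items():
        obj = region_objects.get(region)
        scores[region] = prob_region * p_o[obj] if obj in p_o else prob_region
    return max(scores, key=scores.get)

area = choose_focusing_area(
    "landscape",
    {"landscape": {"middle": 0.6, "upper_left": 0.4}},
    {"landscape": {"face": 0.9, "vehicle": 0.1}},
    {"upper_left": "face"},  # a face was detected in the upper-left region
)
# middle scores 0.6; upper_left scores 0.4 * 0.9 = 0.36, so "middle" wins
```

Swapping the product for a sum, or weighting the two factors, changes only the scoring line, matching the patent's remark that other expressions are possible.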
After the focusing area has been determined, the focusing method proceeds to step S150, and focusing is performed based on the focusing area.
The focusing method of the embodiment of the present invention has been described above with reference to Fig. 1 and Fig. 2.
In the focusing method of the embodiment of the present invention, a photographed scene and an object are identified from the preview image, and the focusing area is determined based on the photographed scene and the object, so that focusing can be performed in a manner that matches the user's usage habits under different photographed scenes. Focusing thus becomes more intelligent and efficient, satisfies the user's personalized demands, and improves the user experience.
Optionally, in one embodiment, before the photographed scene and the object are identified, features that are more important for recognition can be extracted from the preview image, such as one or any combination of edge features, a histogram of gradients, gray-level statistics, color-channel statistics, corner features, and so on. Then, the photographed scene and the object are identified from the preview image based on the extracted features; this process is known to those skilled in the art and is not detailed here. Likewise, before the training scenes and training objects are recognized, the same features can be extracted from the training images and used for recognition.
As a result, in the focusing method of this embodiment of the invention, redundancy is reduced, the calculation speed is increased, and the user experience is improved.
Optionally, in another embodiment, the set of first parameters and the set of second parameters are not only trained offline but can also be updated in real time as the user operates. Specifically, in this embodiment, an input operation that specifies a focusing area in the preview image can be received. The specified focusing area is then determined as the focusing area in the preview image; that is, a focusing area specified by the user takes priority over the automatically determined one. Then, based on the specified focusing area, at least one of the set of first parameters and the set of second parameters can be updated. That is, the current preview image and the specified focusing area are taken as a new training image and training focusing area, and at least one of the set of first parameters and the set of second parameters is recalculated in the manner described above.
As a result, in the focusing method of this embodiment, user preferences are learned continuously, and even when a user's preferences change, the parameters are updated accordingly, so that the focusing area better matches the user's latest usage habits.
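The real-time update described above can be sketched as keeping the raw counts and deriving probabilities on demand, so that each manual override refines the table (hypothetical names; a sketch, not the patent's implementation):

```python
from collections import Counter, defaultdict

class FirstParameterTable:
    """Keeps per-scene counts of user-chosen focusing areas and exposes
    them as probabilities, so every user override is one more sample."""
    def __init__(self):
        self.counts = defaultdict(Counter)

    def update(self, scene, region):
        # Treat the current preview image and the user-specified
        # focusing area as one new training sample.
        self.counts[scene][region] += 1

    def probability(self, scene, region):
        total = sum(self.counts[scene].values())
        return self.counts[scene][region] / total if total else 0.0

table = FirstParameterTable()
table.update("night_scene", "middle")
table.update("night_scene", "middle")
table.update("night_scene", "lower_right")
# p(middle | night_scene) is now 2/3
```

Because probabilities are recomputed from counts, a shift in the user's habits gradually shifts the table, which is exactly the continuous-learning behavior this embodiment claims.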
Optionally, in yet another embodiment, so that automatic focusing can be performed when different users use the electronic equipment, the method of the embodiment of the present invention includes parameter information for different users. That is, the set of first parameters and the set of second parameters belong to first user parameter information among multiple pieces of user parameter information.
In this case, the focusing method can receive user information corresponding to a first user; the user information can be, for example, identification information such as a user ID. Based on the user information, the first user parameter information is determined from the multiple pieces of user parameter information, and the at least one of the set of first parameters and the set of second parameters is thereby determined.
As a result, even for the same preview image under the same scene, different focusing can be achieved based on different user parameter information, so that the electronic equipment can be used by multiple people while the focusing area matches each user's own usage habits.
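The per-user selection can be sketched as a lookup keyed by the user ID (hypothetical names; builds on the same parameter-set structure as the sketches above):

```python
def select_user_parameters(user_id, user_parameter_infos, default_id=None):
    """user_parameter_infos maps a user ID to that user's
    (first_parameter_set, second_parameter_set) pair. Falls back to
    default_id for unknown users, so focusing still works for guests."""
    if user_id in user_parameter_infos:
        return user_parameter_infos[user_id]
    if default_id is not None:
        return user_parameter_infos[default_id]
    raise KeyError(f"no parameter information for user {user_id!r}")

infos = {
    "alice": ({"landscape": {"middle": 0.7}}, {"landscape": {"face": 0.8}}),
    "bob":   ({"landscape": {"upper_left": 0.9}}, {"landscape": {"pet": 0.6}}),
}
first, second = select_user_parameters("bob", infos)
```

With this split, the same preview image yields different focusing areas for "alice" and "bob", which is the multi-user behavior the embodiment describes.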
The focusing method of the embodiments of the present invention has thus been described with reference to Fig. 1 and Fig. 2.
Next, the electronic equipment of the embodiment of the present invention is described with reference to Fig. 3. The electronic equipment of the embodiment of the present invention is, for example, equipment with a camera unit such as a camera, a mobile phone, or a tablet computer.
As shown in Fig. 3, the electronic equipment 300 of the embodiment of the present invention includes: an acquiring unit 310, a processing unit 320, and a focusing unit 330.
The acquiring unit 310 obtains a preview image. The processing unit 320 identifies a photographed scene and an object from the preview image; obtains, based on the photographed scene and the object, at least one of a set of first parameters and a set of second parameters, where a first parameter indicates the likelihood that a specific region in the preview image serves as the focusing area under the photographed scene, and a second parameter indicates the likelihood that the region where the object is located serves as the focusing area under the photographed scene; and determines the focusing area in the preview image based on the at least one of the first parameters and the second parameters. The focusing unit 330 focuses based on the focusing area.
In one embodiment, the processing unit is pre-configured to: obtain multiple training images and multiple training focusing areas, each corresponding to one of the training images, where the multiple training focusing areas include a training focusing area corresponding to the specific region; for each training image, identify the training scene corresponding to the training image from a training scene set, where the training scene set includes a training scene corresponding to the photographed scene; and, based on the training scene identified for each training image and the training focusing area corresponding to the training image, calculate the set of first parameters indicating the association between each training scene and each training focusing area.
The processing unit is also pre-configured to: obtain multiple training images and multiple training focusing areas, each corresponding to a training object; for each training image, identify the training scene corresponding to the training image from the training scene set, where the training scene set includes a training scene corresponding to the photographed scene; for each training image, identify, from a training object set, the training object located at the training focusing area corresponding to the training image, where the training object set includes a training object corresponding to the object; and, based on the training scene identified for each training image and the training object located at the corresponding training focusing area, calculate the set of second parameters indicating the association between each training scene and each training object.
In another embodiment, the processing unit is further pre-configured to: count the number of times each training focusing area occurs under each training scene; and, under each training scene, for each training focusing area, determine the probability associating the training scene with the training focusing area based on the ratio between the number of times the training focusing area occurs and the total number of occurrences of all training focusing areas, thereby forming the set of first parameters. The processing unit is also pre-configured to: count the number of times each training object occurs under each training scene; and, under each training scene, for each training object, determine the probability associating the training scene with the training object based on the ratio between the number of times the training object occurs and the total number of occurrences of all training objects, thereby forming the set of second parameters.
In another embodiment, the processing unit is further configured to: extract features from the preview image; and identify the photographed scene and the object from the preview image based on the extracted features.
In another embodiment, the electronic equipment further includes: an input unit that receives an input operation specifying a focusing area in the preview image. The processing unit is further configured to: determine the specified focusing area as the focusing area in the preview image; and, based on the specified focusing area, update at least one of the set of first parameters and the set of second parameters.
In another embodiment, the set of first parameters and the set of second parameters belong to first user parameter information among multiple pieces of user parameter information, and the electronic equipment further includes: an input unit that receives user information corresponding to a first user. The processing unit is further configured to: determine the first user parameter information from the multiple pieces of user parameter information based on the user information; and thereby determine the at least one of the set of first parameters and the set of second parameters.
The configuration and operation of each unit of the electronic equipment of the embodiment of the present invention have been described in detail in the focusing method described above with reference to Fig. 1 and Fig. 2, and are not repeated here.
In the electronic equipment of the embodiment of the present invention, a photographed scene and an object are identified from the preview image, and the focusing area is determined based on the photographed scene and the object, so that focusing can be performed in a manner that matches the user's usage habits under different photographed scenes. Focusing thus becomes more intelligent and efficient, satisfies the user's personalized demands, and improves the user experience.
Next, the focusing apparatus of the embodiment of the present invention is described with reference to Fig. 4. The focusing apparatus of the embodiment of the present invention applies to electronic equipment with a camera unit, such as a camera, a mobile phone, or a tablet computer.
As shown in Fig. 4, the focusing apparatus 400 of the embodiment of the present invention includes: a first acquiring unit 410, a recognition unit 420, a second acquiring unit 430, a determination unit 440, and a focusing unit 450.
The first acquiring unit 410 obtains a preview image. The recognition unit 420 identifies a photographed scene and an object from the preview image. The second acquiring unit 430 obtains, based on the photographed scene and the object, at least one of a set of first parameters and a set of second parameters, where a first parameter indicates the likelihood that a specific region in the preview image serves as the focusing area under the photographed scene, and a second parameter indicates the likelihood that the region where the object is located serves as the focusing area under the photographed scene. The determination unit 440 determines the focusing area in the preview image based on the at least one of the first parameters and the second parameters. The focusing unit 450 focuses based on the focusing area.
In one embodiment, the focusing apparatus 400 further includes: a third acquiring unit that obtains multiple training images and multiple training focusing areas, each corresponding to one of the training images, where the multiple training focusing areas include a training focusing area corresponding to the specific region; a second recognition unit that, for each training image, identifies the training scene corresponding to the training image from a training scene set, where the training scene set includes a training scene corresponding to the photographed scene; and a first computing unit that, based on the training scene identified for each training image and the training focusing area corresponding to the training image, calculates the set of first parameters indicating the association between each training scene and each training focusing area.
The focusing apparatus 400 further includes: a third acquiring unit that obtains multiple training images and multiple training focusing areas, each corresponding to a training object; a second recognition unit that, for each training image, identifies the training scene corresponding to the training image from the training scene set, where the training scene set includes a training scene corresponding to the photographed scene; a third recognition unit that, for each training image, identifies, from a training object set, the training object located at the training focusing area corresponding to the training image, where the training object set includes a training object corresponding to the object; and a second computing unit that, based on the training scene identified for each training image and the training object located at the corresponding training focusing area, calculates the set of second parameters indicating the association between each training scene and each training object.
In another embodiment, the first computing unit includes: a first statistics unit, which counts the number of times each training focusing area occurs under each training scene; and a first computation subunit, which, under each training scene and for each training focusing area, determines the probability that the training scene is associated with the training focusing area based on the ratio between the number of times the training focusing area occurs and the total number of times all training focusing areas occur, thereby forming the set of the first parameter.
The second computing unit includes: a second statistics unit, which counts the number of times each training object occurs under each training scene; and a second computation subunit, which, under each training scene and for each training object, determines the probability that the training scene is associated with the training object based on the ratio between the number of times the training object occurs and the total number of times all training objects occur, thereby forming the set of the second parameter.
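The counting scheme used by the two computing units amounts to estimating the conditional probabilities P(focusing area | scene) and P(object | scene) from occurrence frequencies. A minimal sketch follows; the scene, area, and object labels and the sample data are hypothetical and purely illustrative:

```python
from collections import Counter, defaultdict

def estimate_parameters(training_samples):
    """Estimate P(focusing area | scene) and P(object | scene) by counting.

    Each training sample is a (scene, focusing_area, object) triple, as
    would be produced by the recognition units described above.
    """
    area_counts = defaultdict(Counter)    # scene -> Counter of focusing areas
    object_counts = defaultdict(Counter)  # scene -> Counter of objects

    for scene, area, obj in training_samples:
        area_counts[scene][area] += 1
        object_counts[scene][obj] += 1

    def normalize(counts):
        # Probability = occurrences of this item / total occurrences under the scene.
        return {
            scene: {item: n / sum(counter.values()) for item, n in counter.items()}
            for scene, counter in counts.items()
        }

    return normalize(area_counts), normalize(object_counts)

samples = [
    ("landscape", "center", "mountain"),
    ("landscape", "center", "tree"),
    ("landscape", "lower-third", "tree"),
    ("portrait", "face-region", "person"),
]
first_param, second_param = estimate_parameters(samples)
print(first_param["landscape"]["center"])  # 2/3
```

Because each scene's probabilities are normalized by the total count under that scene, the values for any given scene sum to one, matching the ratio rule stated above.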
In another embodiment, the focusing mechanism 400 further includes: a feature extraction unit, which extracts features from the preview image; and a scene and object identification unit, which identifies the photographed scene and the object from the preview image based on the extracted features.
In another embodiment, the focusing mechanism 400 further includes: a first receiving unit, which receives an input operation specifying a focusing area in the preview image; a designating unit, which determines the specified focusing area as the focusing area in the preview image; and an updating unit, which updates at least one of the set of the first parameter and the set of the second parameter based on the specified focusing area.
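One way the updating unit could fold a user-specified focusing area back into the set of the first parameter is to keep the underlying occurrence counts and re-normalize after each manual selection. The class and labels below are a hypothetical sketch, not the patent's prescribed implementation:

```python
from collections import Counter, defaultdict

class ParameterUpdater:
    """Keeps raw occurrence counts so that a user-specified focusing area
    can be folded back into the set of the first parameter."""

    def __init__(self):
        self.area_counts = defaultdict(Counter)  # scene -> Counter of areas

    def record(self, scene, area):
        # A manual focus selection counts as one more observation of this
        # (scene, area) pair, just like a training sample.
        self.area_counts[scene][area] += 1

    def first_parameter(self, scene):
        counter = self.area_counts[scene]
        total = sum(counter.values())
        return {area: n / total for area, n in counter.items()}

updater = ParameterUpdater()
updater.record("night", "center")
updater.record("night", "center")
updater.record("night", "upper-left")   # the user overrides the suggestion once
print(updater.first_parameter("night")["center"])  # 2/3
```

Re-deriving probabilities from counts keeps the update incremental: each correction shifts the distribution gradually toward the user's habit rather than overwriting it.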
In another embodiment, the set of the first parameter and the set of the second parameter belong to first user parameter information among a plurality of pieces of user parameter information. The focusing mechanism 400 further includes a second receiving unit, which receives user information corresponding to a first user, and the second obtaining unit includes: a parameter information determination unit, which determines the first user parameter information from the plurality of pieces of user parameter information based on the user information; and a parameter determination unit, which determines the at least one of the first parameter and the second parameter based on the first user parameter information and the user information.
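The per-user lookup and the final selection can be sketched together: retrieve the parameter sets for the identified user, score candidate regions with the two probabilities, and pick the best. The function names, the user-id keying, and the max-probability rule are all assumptions for illustration; the patent only requires some preset algorithm over the two parameters:

```python
def choose_focusing_area(scene, object_region, user_info, user_parameter_infos):
    """Pick the focusing area with the highest available probability.

    user_parameter_infos maps a user id to that user's (first_param,
    second_param) pair of conditional-probability tables.
    """
    first_param, second_param = user_parameter_infos[user_info["user_id"]]

    candidates = {}
    # Score each specific region by P(region | scene) from the first parameter.
    for region, p in first_param.get(scene, {}).items():
        candidates[region] = max(candidates.get(region, 0.0), p)
    # Score the region where the detected object sits by P(object | scene).
    obj, region = object_region
    p_obj = second_param.get(scene, {}).get(obj)
    if p_obj is not None:
        candidates[region] = max(candidates.get(region, 0.0), p_obj)

    return max(candidates, key=candidates.get)

user_params = {
    "user-1": (
        {"portrait": {"center": 0.3, "face-region": 0.2}},
        {"portrait": {"person": 0.9}},
    )
}
area = choose_focusing_area(
    "portrait", ("person", "face-region"), {"user_id": "user-1"}, user_params
)
print(area)  # face-region: 0.9 from the object score beats 0.3 for center
```

Here the object evidence outweighs the generic region prior, which is the intended behavior when a strongly scene-associated object is present.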
The configuration and operation of each unit of the focusing mechanism of the embodiment of the present invention have been described in detail above in the focusing method described with reference to FIG. 1 and FIG. 2, and are not repeated here.
In the focusing mechanism of the embodiment of the present invention, the photographed scene and the object are identified from the preview image, and the focusing area is determined based on the photographed scene and the object, so that focusing can be performed under different photographed scenes in a manner that conforms to the user's usage habits. Focusing thus becomes more intelligent and efficient, meets the user's personalized needs, and improves the user experience.
Above, the focusing method, the focusing mechanism, and the electronic device according to the embodiments of the present invention have been described with reference to FIG. 1 to FIG. 4.
It should be noted that, in this specification, the terms "include", "comprise", or any other variant thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or device that includes a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or device. In the absence of further limitation, an element defined by the phrase "including a ..." does not exclude the presence of additional identical elements in the process, method, article, or device that includes the element.
In addition, it should be noted that, in this specification, expressions such as "first ... unit" and "second ... unit" are used only for convenience of distinction in the description, and do not mean that they must be implemented as two or more physically separate units. In fact, as needed, those units may be implemented entirely as one unit, or may be implemented as multiple units.
Finally, it should be noted that the above-described series of processes includes not only processes executed in time sequence in the order described here, but also processes executed in parallel or individually rather than in chronological order.
Through the above description of the embodiments, those skilled in the art can clearly understand that the present invention can be implemented by software plus a necessary hardware platform, and, of course, can also be implemented entirely by hardware. Based on this understanding, all or part of the contribution that the technical solution of the present invention makes over the background art can be embodied in the form of a software product. The computer software product can be stored in a storage medium, such as a ROM/RAM, a magnetic disk, or an optical disc, and includes several instructions that cause a computer device (which may be a personal computer, a server, a network device, or the like) to execute the methods described in the embodiments of the present invention or in certain parts of the embodiments.
In the embodiments of the present invention, units/modules can be implemented in software so as to be executed by various types of processors. For example, an identified module of executable code may include one or more physical or logical blocks of computer instructions, which may, for instance, be built as an object, a procedure, or a function. Nevertheless, the executable code of an identified module need not be physically located together, but may include different instructions stored in different locations which, when logically combined, constitute the unit/module and achieve the stated purpose of that unit/module.
When a unit/module can be implemented in software, considering the level of existing hardware technology, those skilled in the art can, where cost is not a concern, build corresponding hardware circuits to realize the corresponding functions. Such hardware circuits include conventional very-large-scale integration (VLSI) circuits or gate arrays, existing semiconductors such as logic chips and transistors, or other discrete components. Modules can also be implemented with programmable hardware devices, such as field-programmable gate arrays, programmable logic arrays, and programmable logic devices.
The present invention has been described in detail above. Specific examples have been used herein to explain the principles and implementations of the present invention, and the above description of the embodiments is intended only to help understand the method of the present invention and its core idea. Meanwhile, for those of ordinary skill in the art, there will be changes in the specific implementations and the scope of application according to the idea of the present invention. In summary, the contents of this specification should not be construed as limiting the present invention.
Claims (12)
1. A focusing method, comprising:
obtaining a preview image;
identifying a photographed scene and an object from the preview image;
obtaining, based on the photographed scene and the object, at least one of a set of a first parameter and a set of a second parameter, the first parameter indicating a possibility that a specific region in the preview image serves as a focusing area under the photographed scene, and the second parameter indicating a possibility that a region where the object is located serves as a focusing area under the photographed scene;
determining, based on the at least one of the first parameter and the second parameter, the focusing area in the preview image by a preset algorithm; and
performing focusing based on the focusing area.
2. The focusing method according to claim 1, wherein
the set of the first parameter is obtained by the following steps:
obtaining a plurality of training images and a plurality of training focusing areas respectively corresponding to each training image, the plurality of training focusing areas including a training focusing area corresponding to the specific region;
for each training image, identifying from a training scene set the training scene corresponding to that training image, the training scene set including a training scene corresponding to the photographed scene; and
based on the training scene identified for each training image and the training focusing area corresponding to that training image, calculating the set of the first parameter indicating the association between each training scene and each training focusing area;
and wherein the set of the second parameter is obtained by the following steps:
obtaining a plurality of training images and a plurality of training focusing areas respectively corresponding to each training object;
for each training image, identifying from a training scene set the training scene corresponding to that training image, the training scene set including a training scene corresponding to the photographed scene;
for each training image, identifying from a training object set the training object located at the training focusing area corresponding to that training image, the training object set including a training object corresponding to the object; and
based on the training scene identified for each training image and the training object located at the corresponding training focusing area, calculating the set of the second parameter indicating the association between each training scene and each training object.
3. The focusing method according to claim 2, wherein
the step of calculating the set of the first parameter includes:
counting the number of times each training focusing area occurs under each training scene; and
under each training scene, for each training focusing area, determining the probability that the training scene is associated with the training focusing area based on the ratio between the number of times the training focusing area occurs and the total number of times all training focusing areas occur, so as to form the set of the first parameter;
and the step of calculating the set of the second parameter includes:
counting the number of times each training object occurs under each training scene; and
under each training scene, for each training object, determining the probability that the training scene is associated with the training object based on the ratio between the number of times the training object occurs and the total number of times all training objects occur, so as to form the set of the second parameter.
4. The focusing method according to claim 1, further comprising:
extracting features from the preview image; and
identifying the photographed scene and the object from the preview image based on the extracted features.
5. The focusing method according to claim 1, further comprising:
receiving an input operation specifying a focusing area in the preview image;
determining the specified focusing area as the focusing area in the preview image; and
updating at least one of the set of the first parameter and the set of the second parameter based on the specified focusing area.
6. The focusing method according to claim 1, wherein the set of the first parameter and the set of the second parameter belong to first user parameter information among a plurality of pieces of user parameter information, and the method further comprises:
receiving user information corresponding to a first user;
and wherein the step of determining, based on the at least one of the first parameter and the second parameter, the focusing area in the preview image by the preset algorithm includes:
determining the first user parameter information from the plurality of pieces of user parameter information based on the user information; and
determining the focusing area in the preview image by the preset algorithm based on the at least one of the first parameter and the second parameter and the user information.
7. An electronic device, comprising:
an acquiring unit, which obtains a preview image;
a processing unit, which identifies a photographed scene and an object from the preview image; obtains, based on the photographed scene and the object, at least one of a set of a first parameter and a set of a second parameter, the first parameter indicating a possibility that a specific region in the preview image serves as a focusing area under the photographed scene, and the second parameter indicating a possibility that a region where the object is located serves as a focusing area under the photographed scene; and determines, based on the at least one of the first parameter and the second parameter, the focusing area in the preview image by a preset algorithm; and
a focusing unit, which performs focusing based on the focusing area.
8. The electronic device according to claim 7,
wherein the processing unit is preconfigured to:
obtain a plurality of training images and a plurality of training focusing areas respectively corresponding to each training image, the plurality of training focusing areas including a training focusing area corresponding to the specific region;
for each training image, identify from a training scene set the training scene corresponding to that training image, the training scene set including a training scene corresponding to the photographed scene; and
based on the training scene identified for each training image and the training focusing area corresponding to that training image, calculate the set of the first parameter indicating the association between each training scene and each training focusing area;
and the processing unit is further preconfigured to:
obtain a plurality of training images and a plurality of training focusing areas respectively corresponding to each training object;
for each training image, identify from a training scene set the training scene corresponding to that training image, the training scene set including a training scene corresponding to the photographed scene;
for each training image, identify from a training object set the training object located at the training focusing area corresponding to that training image, the training object set including a training object corresponding to the object; and
based on the training scene identified for each training image and the training object located at the corresponding training focusing area, calculate the set of the second parameter indicating the association between each training scene and each training object.
9. The electronic device according to claim 8, wherein
the processing unit is further preconfigured to:
count the number of times each training focusing area occurs under each training scene; and
under each training scene, for each training focusing area, determine the probability that the training scene is associated with the training focusing area based on the ratio between the number of times the training focusing area occurs and the total number of times all training focusing areas occur, so as to form the set of the first parameter;
and the processing unit is further preconfigured to:
count the number of times each training object occurs under each training scene; and
under each training scene, for each training object, determine the probability that the training scene is associated with the training object based on the ratio between the number of times the training object occurs and the total number of times all training objects occur, so as to form the set of the second parameter.
10. The electronic device according to claim 7, wherein the processing unit is further configured to: extract features from the preview image; and identify the photographed scene and the object from the preview image based on the extracted features.
11. The electronic device according to claim 7, further comprising:
an input unit, which receives an input operation specifying a focusing area in the preview image;
wherein the processing unit is further configured to:
determine the specified focusing area as the focusing area in the preview image; and
update at least one of the set of the first parameter and the set of the second parameter based on the specified focusing area.
12. The electronic device according to claim 7, wherein the set of the first parameter and the set of the second parameter belong to first user parameter information among a plurality of pieces of user parameter information, and the electronic device further comprises:
an input unit, which receives user information corresponding to a first user;
wherein the processing unit is further configured to:
determine the first user parameter information from the plurality of pieces of user parameter information based on the user information; and
determine the at least one of the first parameter and the second parameter based on the first user parameter information and the user information.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610082815.8A CN105791674B (en) | 2016-02-05 | 2016-02-05 | Electronic equipment and focusing method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN105791674A CN105791674A (en) | 2016-07-20 |
CN105791674B true CN105791674B (en) | 2019-06-25 |
Family
ID=56402700
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610082815.8A Active CN105791674B (en) | 2016-02-05 | 2016-02-05 | Electronic equipment and focusing method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN105791674B (en) |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109963072B (en) * | 2017-12-26 | 2021-03-02 | Oppo广东移动通信有限公司 | Focusing method, focusing device, storage medium and electronic equipment |
CN108282608B (en) * | 2017-12-26 | 2020-10-09 | 努比亚技术有限公司 | Multi-region focusing method, mobile terminal and computer readable storage medium |
CN108712609A (en) * | 2018-05-17 | 2018-10-26 | Oppo广东移动通信有限公司 | Focusing process method, apparatus, equipment and storage medium |
CN109495689B (en) * | 2018-12-29 | 2021-04-13 | 北京旷视科技有限公司 | Shooting method and device, electronic equipment and storage medium |
CN109951647A (en) * | 2019-01-23 | 2019-06-28 | 努比亚技术有限公司 | A kind of acquisition parameters setting method, terminal and computer readable storage medium |
CN110290324B (en) * | 2019-06-28 | 2021-02-02 | Oppo广东移动通信有限公司 | Device imaging method and device, storage medium and electronic device |
CN110572573B (en) * | 2019-09-17 | 2021-11-09 | Oppo广东移动通信有限公司 | Focusing method and device, electronic equipment and computer readable storage medium |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103491299A (en) * | 2013-09-17 | 2014-01-01 | 宇龙计算机通信科技(深圳)有限公司 | Photographic processing method and device |
CN103905729A (en) * | 2007-05-18 | 2014-07-02 | 卡西欧计算机株式会社 | Imaging device and program thereof |
CN104092936A (en) * | 2014-06-12 | 2014-10-08 | 小米科技有限责任公司 | Automatic focusing method and apparatus |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20130085316A (en) * | 2012-01-19 | 2013-07-29 | 한국전자통신연구원 | Apparatus and method for acquisition of high quality face image with fixed and ptz camera |
CN105120153B (en) * | 2015-08-20 | 2018-01-19 | 广东欧珀移动通信有限公司 | A kind of image capturing method and device |
- 2016-02-05: Application CN201610082815.8A filed (CN); granted as patent CN105791674B (status: Active)
Also Published As
Publication number | Publication date |
---|---|
CN105791674A (en) | 2016-07-20 |
Legal Events
Date | Code | Title | Description
---|---|---|---
| C06 | Publication | |
| PB01 | Publication | |
| C10 | Entry into substantive examination | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |