
CN106791809B - Light metering method and mobile terminal - Google Patents

Light metering method and mobile terminal

Info

Publication number
CN106791809B
CN106791809B (application CN201611151717.1A)
Authority
CN
China
Prior art keywords
region
target
light
preview image
photometry region
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201611151717.1A
Other languages
Chinese (zh)
Other versions
CN106791809A (zh)
Inventor
殷求明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Vivo Mobile Communication Co Ltd Beijing Branch
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd and Vivo Mobile Communication Co Ltd Beijing Branch
Priority to CN201611151717.1A
Publication of CN106791809A
Application granted
Publication of CN106791809B
Legal status: Active
Anticipated expiration


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N17/00Diagnosis, testing or measuring for television systems or their details
    • H04N17/002Diagnosis, testing or measuring for television systems or their details for television cameras

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)

Abstract

The present invention provides a light metering method and a mobile terminal. The method is applied to a mobile terminal including a first camera and a second camera, and includes: while the preview image captured by the first camera is being output to the photographing preview interface, obtaining a target metering region of the preview image; obtaining the target depth-of-field range of the target metering region as detected by the second camera; adjusting the metering weights of the preview image based on the target depth-of-field range; and metering the preview image based on the adjusted metering weights. This improves metering accuracy and yields photometric data that better matches the actual imaging demand, so that subsequent exposure is reasonable and the photographing effect is guaranteed.

Description

Light metering method and mobile terminal
Technical field
The present invention relates to the field of communication technology, and in particular to a light metering method and a mobile terminal.
Background art
The metering system of a digital camera measures the intensity of the light reflected by the subject through the lens, known as TTL (Through The Lens) metering. The metering sensor is placed in the photographic light path; light is first reflected by the mirror onto the sensor, where metering is performed. The measured light-intensity data is transferred to the camera's processor, which, after the automatic-exposure calculation, determines the exposure combination, that is, the aperture and shutter speed to be used for the shot. Shooting with the aperture and shutter values given by the camera then produces a correctly exposed photograph.
Depending on which regions within the viewfinder the metering sensor measures, their size, and the weights used in the calculation, metering modes can be divided into average metering, center-weighted metering, spot metering, matrix metering, and so on.
In spot metering, the metered range is a region around the center of the viewfinder frame covering roughly 1%-3% of the whole frame. Spot metering is essentially unaffected by the brightness of the scene outside the metering region, so it can conveniently measure individual areas of the subject or background, and its high sensitivity and precision fully capture the light response of the frame. With the maturity and wide adoption of touch screens on cameras and smart terminals, the user can select the specific spot-metering region via the touch screen instead of being restricted to the center of the frame, which greatly improves the effectiveness and usability of spot metering.
In the related art, when metering in spot-metering mode, a metering point is first confirmed, and a rectangular or circular region centered on that point, covering 1%-3% of the area of the whole preview interface, is used. However, this metering approach uses a single selected area as the metering region, which adversely affects the metering and exposure of parts outside the selected area that also need to be rendered clearly. The resulting pictures are either overexposed or underexposed, causing serious exposure errors and failing to meet the photographing demand.
Summary of the invention
Embodiments of the present invention provide a light metering method and a mobile terminal, to solve the problem that existing metering is prone to exposure errors and cannot meet the photographing demand.
In one aspect, an embodiment of the present invention provides a light metering method, applied to a mobile terminal including a first camera and a second camera, including:
while the preview image captured by the first camera is being output to the photographing preview interface, obtaining a target metering region of the preview image;
obtaining the target depth-of-field range of the target metering region as detected by the second camera;
adjusting the metering weights of the preview image based on the target depth-of-field range;
metering the preview image based on the adjusted metering weights.
In another aspect, an embodiment of the present invention further provides a mobile terminal, including a first camera and a second camera, and further including:
a first obtaining module, configured to obtain a target metering region of the preview image while the preview image captured by the first camera is being output to the photographing preview interface;
a second obtaining module, configured to obtain the target depth-of-field range, detected by the second camera, of the target metering region obtained by the first obtaining module;
an adjusting module, configured to adjust the metering weights of the preview image based on the target depth-of-field range obtained by the second obtaining module;
a metering module, configured to meter the preview image based on the metering weights adjusted by the adjusting module.
In this way, while the preview image captured by the first camera is being output to the photographing preview interface, the target metering region of the preview image is obtained, the target depth-of-field range of that region as detected by the second camera is obtained, the metering weights of the preview image are adjusted based on the target depth-of-field range, and finally the preview image is metered with the adjusted weights. Through the cooperation of the two cameras, this process obtains both the target metering region in the preview image and the depth-of-field range in which it lies, adjusts the metering weights of the whole preview image based on that range, and thus uses local depth-of-field information to determine the metering strategy for the whole frame. Subjects at different spatial positions in the preview image are metered with differentiated weights, which guarantees metering accuracy and improves the metering result.
Brief description of the drawings
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
Fig. 1 shows the flowchart of the light metering method in the first embodiment of the invention;
Fig. 2 shows the flowchart of the light metering method in the second embodiment of the invention;
Fig. 3 shows the flowchart of the light metering method in the third embodiment of the invention;
Fig. 4 shows the flowchart of the light metering method in the fourth embodiment of the invention;
Fig. 5 shows the first structural block diagram of the mobile terminal in the fifth embodiment of the invention;
Fig. 6 shows the second structural block diagram of the mobile terminal in the fifth embodiment of the invention;
Fig. 7 shows the structural block diagram of the mobile terminal in the sixth embodiment of the invention;
Fig. 8 shows the structural block diagram of the mobile terminal in the seventh embodiment of the invention.
Detailed description of the embodiments
First embodiment
A light metering method is disclosed in this embodiment, applied to a mobile terminal including a first camera and a second camera. With reference to Fig. 1, the method includes:
Step 101: while the preview image captured by the first camera is being output to the photographing preview interface, obtain the target metering region of the preview image.
When the camera is started, the mobile terminal is in the photographing preview interface, and the camera continuously captures the external scene to obtain preview images, which are output to and displayed in the photographing preview interface. The process in which the first camera captures the preview image and outputs it to the photographing preview interface thus includes: the first camera capturing the external scene, processing it into a preview image, delivering the preview image to the display screen, and displaying the preview image captured by the first camera in the photographing preview interface.
The target metering region is a part of the display area of the preview interface. It can be acquired automatically by the mobile terminal, or acquired according to a selection instruction of the user. The subsequent metering processing is performed based on this target metering region.
Specifically, in a preferred embodiment, the step of obtaining the target metering region of the preview image includes: detecting a tap by the user of the mobile terminal on the photographing preview interface; when a tap is detected, obtaining the position of the tap; and determining a region of a preset first range in the preview image that contains the tap position as the target metering region.
This corresponds to the case in which, while the preview image captured by the first camera is output to and displayed in the photographing preview interface, the user taps the photographing preview interface to choose a target area. On detecting the tap on the photographing preview interface, the mobile terminal obtains the target metering region selected by the user, which contains the position where the tap occurred. Since the target area is obtained automatically from the user's tap, the determined target metering region matches the actual demand of the user.
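As an illustration of the tap-to-region step above, the following sketch maps a tap position to a square metering region covering about 2% of the preview, within the 1%-3% typical of spot metering. The function name, the square shape, and the exact area fraction are assumptions for illustration, not details from the patent:

```python
def tap_to_metering_region(tap_x, tap_y, img_w, img_h, area_fraction=0.02):
    """Map a tap position on the preview to a rectangular metering region.

    The region is centered on the tap, covers `area_fraction` of the
    preview, and is clamped to the image bounds. Returns the box as
    (left, top, right, bottom) in pixel coordinates.
    """
    # Side length of a square whose area is area_fraction of the image.
    side = int((area_fraction * img_w * img_h) ** 0.5)
    half = side // 2
    # Clamp so the box stays entirely inside the preview.
    left = max(0, min(tap_x - half, img_w - side))
    top = max(0, min(tap_y - half, img_h - side))
    return (left, top, left + side, top + side)
```

A tap near an edge simply slides the box inward, so the metered area is always the same size regardless of where the user taps.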
Specifically, in another preferred embodiment, the step of obtaining the target metering region of the preview image includes: determining a region of a preset second range in the preview image as the target metering region.
In this case the target metering region is locked directly in the preview image. It can be a default region preset in the system; the region of the preset second range can be the central area of the display screen, which suits conventional photographing. Having the system select the region automatically keeps the metering process simple to use.
Specifically, in yet another preferred embodiment, the step of obtaining the target metering region of the preview image includes: performing face detection on the preview image; and when a face is detected, determining the region where the face is located as the target metering region.
This addresses the metering demand when the user shoots a portrait: the face region in the preview image is directly used as the target metering region. When the subject contains a person, the face region can automatically and efficiently serve as the main region for parameter acquisition, and the subsequent comparison and matching are performed according to the parameters of that region. With the person photographed by the user given priority, the final exposure of the face is enhanced, which maximally guarantees a good metering result for the face region, further ensures a good final shooting effect for the face, and meets the user's demand.
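The face-based selection above can be sketched as follows. The helper and its box format are hypothetical; `faces` stands in for the output of any face detector (OpenCV's `CascadeClassifier.detectMultiScale`, for instance, returns such (x, y, w, h) boxes), and the largest-face tie-break is an assumption, since the patent does not say how multiple faces are handled:

```python
def face_metering_region(faces, default_region):
    """Choose the target metering region from face-detection results.

    `faces` is a list of (x, y, w, h) boxes from a face detector. If any
    face is found, the largest one becomes the target metering region;
    otherwise fall back to `default_region` (e.g. the preset center area).
    Returns (left, top, right, bottom).
    """
    if not faces:
        return default_region
    # Prefer the largest detected face as the main subject.
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
    return (x, y, x + w, y + h)
```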
Step 102: obtain the target depth-of-field range of the target metering region as detected by the second camera.
This is realized through the cooperation of the second camera and the first camera: by combining the image data captured by the second camera with the image data obtained while the first camera captures the preview image, the depth-of-field parameters of the subjects in the preview image can be obtained by triangulating objects between the two cameras, and the target depth-of-field range of the target metering region in the preview image can then be obtained.
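The patent does not give the triangulation formula, but for rectified stereo cameras the standard relation between disparity and depth is Z = f·B/d. A minimal sketch under that assumption, with purely illustrative parameter values:

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Depth of a scene point from the disparity between the two cameras.

    Standard stereo triangulation: Z = f * B / d, where f is the focal
    length in pixels, B the baseline between the two cameras in metres,
    and d the horizontal disparity in pixels between the two views.
    Assumes rectified cameras; the parameter values used in practice
    depend on the terminal's camera module.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px
```

With a 1000 px focal length and a 2 cm baseline, a 10 px disparity corresponds to a point about 2 m away; the depth values of the pixels inside the target metering region then give its depth-of-field range.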
Step 103: adjust the metering weights of the preview image based on the target depth-of-field range.
Since the subjects shown in the preview image differ in spatial position, the depth-of-field values of subjects at different positions differ. The target depth-of-field range corresponds to the subject shown in the target metering region, which is a part of the subjects shown in the preview image. The metering weights of the whole preview image are adjusted based on the depth-of-field range of this partial region, so that local depth-of-field information determines the metering strategy for the whole frame. Subjects at different spatial positions in the preview image are metered with differentiated weights, which guarantees metering accuracy and improves the metering result.
Step 104: meter the preview image based on the adjusted metering weights.
After the metering weights of the preview image have been adjusted based on the target depth-of-field range, the final metering operation on the preview image is executed according to those weights.
In the light metering method of this embodiment of the invention, while the preview image captured by the first camera is output to the photographing preview interface, the target metering region of the preview image is obtained, the target depth-of-field range of that region as detected by the second camera is obtained, the metering weights of the preview image are adjusted based on the target depth-of-field range, and finally the preview image is metered with the adjusted weights. Through the cooperation of the two cameras, this process obtains both the target metering region in the preview image and its depth-of-field range, adjusts the metering weights of the whole preview image based on that range, and uses local depth-of-field information to determine the metering strategy for the whole frame. Subjects at different spatial positions are metered with differentiated weights, which guarantees metering accuracy, improves the metering result, and yields photometric data better matching the actual imaging demand, so that subsequent exposure is reasonable and the photographing effect is guaranteed.
Second embodiment
A light metering method is disclosed in this embodiment, applied to a mobile terminal including a first camera and a second camera. With reference to Fig. 2, the method includes:
Step 201: while the preview image captured by the first camera is being output to the photographing preview interface, obtain the target metering region of the preview image.
When the camera is started, the mobile terminal is in the photographing preview interface, and the camera continuously captures the external scene to obtain preview images, which are output to and displayed in the photographing preview interface. The process in which the first camera captures the preview image and outputs it to the photographing preview interface includes: the first camera capturing the external scene, processing it into a preview image, delivering the preview image to the display screen, and displaying the preview image captured by the first camera in the photographing preview interface.
The target metering region is a part of the display area of the photographing preview interface. It can be acquired automatically by the mobile terminal, or acquired according to a selection instruction of the user. The subsequent metering processing is performed based on this target metering region.
Step 202: obtain the target depth-of-field range of the target metering region as detected by the second camera.
This is realized through the cooperation of the second camera and the first camera: by combining the image data captured by the second camera with the image data obtained while the first camera captures the preview image, the depth-of-field parameters of the subjects in the preview image can be obtained by triangulating objects between the two cameras, and the target depth-of-field range of the target metering region in the preview image can then be obtained.
Step 203: control the second camera to detect the depth-of-field values of all pixels in the edge region of the preview image.
Here the edge region is: the whole image area other than the target metering region.
This is realized through the cooperation of the second camera and the first camera: by combining the image data captured by the second camera with the image data obtained while the first camera captures the preview image, the depth-of-field parameters of the subjects in the preview image can be obtained by triangulating objects between the two cameras, and the depth-of-field values of the pixels in the whole image area of the preview image other than the target metering region can then be obtained.
Step 204: in the edge region, extract at least one target edge region whose depth-of-field range is the target depth-of-field range.
In this step, matching is performed in the edge region of the preview image based on the target depth-of-field range of the target metering region, and the target edge regions whose depth of field lies in the target range are extracted. The depth-of-field value of every pixel in each of the at least one target edge region lies within the target depth-of-field range.
Step 205: determine the target metering region and the at least one target edge region as the center metering region.
After the at least one target edge region whose depth-of-field range is the target depth-of-field range has been extracted from the edge region according to the target depth-of-field range of the target metering region, the target metering region together with the at least one target edge region becomes the center metering region. The center metering region is the region of the preview image formed by the pixels in the same depth-of-field range (the target depth-of-field range).
In this process, the target depth-of-field range of the target metering region selected in the preview image is used as the comparison condition: the depth-of-field values of the pixels in each region of the image displayed in the preview interface are compared against it, and the display areas that match the target depth-of-field range form the center metering region.
For example, if a kitten and a puppy in the preview image correspond to the same depth-of-field range and the user taps the kitten, then both the kitten and the puppy are included in the center metering region, and high-weight metering can be applied to the center metering region. By selecting one region, the image regions with the same depth-of-field range as that region are determined automatically, completing the determination and division of the center metering region and improving the accuracy and intelligence of the metering process.
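The kitten-and-puppy example can be sketched as a per-pixel depth comparison: every pixel whose depth (as reported by the second camera) falls in the target depth-of-field range joins the center metering region, and everything else is the auxiliary region. The function name and the nested-list data layout are illustrative assumptions:

```python
def split_regions(depth_map, target_range):
    """Classify each pixel as center (True) or auxiliary (False).

    depth_map: 2D list of per-pixel depth values from the second camera.
    target_range: (lo, hi) depth-of-field range of the target metering
    region. Returns a boolean mask of the same shape, where True marks
    the center metering region (steps 204-206 of the second embodiment).
    """
    lo, hi = target_range
    return [[lo <= d <= hi for d in row] for row in depth_map]
```

If the tap landed on a subject at depth ~2 m, any other subject at the same depth ends up in the center region even if it is far away in the frame, exactly as in the kitten/puppy example.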
Step 206: determine the whole image area of the preview image other than the center metering region as the auxiliary metering region.
Correspondingly, the auxiliary metering region is likewise a part of the preview image: after the center metering region is determined, the remaining image area of the preview image is the auxiliary metering region.
Step 207: adjust the metering weights of the center metering region and the auxiliary metering region.
Specifically, the metering weight set for the auxiliary metering region is smaller than the weight of the center metering region.
The metering weights take values in the range 0-100%. By assigning different metering weights to the different regions obtained by the division, the preview image is metered with appropriate emphasis, so that important areas such as a face or the picture at the focus point obtain reasonable, properly emphasized photometric data that better matches the actual imaging demand, so that subsequent exposure is reasonable and the photographing effect is guaranteed.
Step 208: meter the preview image based on the adjusted metering weights.
After the metering weights of the preview image have been adjusted based on the target depth-of-field range, the metering of the center metering region and the auxiliary metering region of the preview image is executed according to those weights.
Metering the preview image based on the adjusted metering weights specifically means performing the weighted metering operation on the luminance information of the pixels in the different regions of the preview image: the luminance values of the pixels in each of the determined regions are collected, the average luminance of each region is weighted by the metering weight of that region, and the photometric result is obtained, enabling subsequent luminance compensation and improving metering accuracy.
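The weighted metering just described can be sketched as follows: compute each region's mean luminance, then combine the means with the metering weights. The defaults use the patent's preferred a = 80, b = 20 split; the function name and data layout are assumptions, and passing weights 1.0 and 0.0 reproduces the 100%/0 variant:

```python
def weighted_meter(luma, center_mask, w_center=0.8, w_aux=0.2):
    """Weighted-average brightness over center and auxiliary regions.

    luma: 2D list of per-pixel luminance values of the preview image.
    center_mask: boolean mask of the same shape (True = center region).
    Returns the metering result: each region's mean luminance combined
    with its metering weight.
    """
    center, aux = [], []
    for row_l, row_m in zip(luma, center_mask):
        for l, is_center in zip(row_l, row_m):
            (center if is_center else aux).append(l)
    mean = lambda xs: sum(xs) / len(xs) if xs else 0.0
    return w_center * mean(center) + w_aux * mean(aux)
```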
Further, adjusting the metering weights of the center metering region and the auxiliary metering region, and metering the preview image based on the adjusted weights, can be realized in different ways.
In one approach, the step of adjusting the metering weights of the center metering region and the auxiliary metering region includes: adjusting the metering weight of the center metering region to 100%, and adjusting the metering weight of the auxiliary metering region to 0.
Correspondingly, the step of metering the preview image based on the adjusted metering weights includes: metering the center metering region of the preview image with the 100% metering weight of the center metering region.
In that case the center metering region is metered with a 100% weight: metering is performed entirely on the regions of the preview image that match the target depth-of-field range of the target metering region, while the display areas in other depth-of-field ranges are ignored, so that the metering concentrates on rendering the center metering region and meets the metering demand of specific photographing scenarios.
In another approach, the step of adjusting the metering weights of the center metering region and the auxiliary metering region includes: adjusting the metering weight of the center metering region to a%, and adjusting the metering weight of the auxiliary metering region to b%.
Correspondingly, the step of metering the preview image based on the adjusted metering weights includes: metering the center metering region and the auxiliary metering region with the metering weight a% of the center metering region and the metering weight b% of the auxiliary metering region.
Here 50 < a < 100 and a + b = 100; that is, the metering weights take values in the range 0-100%, the image area of the preview image is divided into two parts, and the two parts' metering weights sum to 100%. The metering weight of the center metering region is greater than that of the auxiliary metering region. Preferably, a is 80 and b is 20.
This realizes a differentiated, emphasis-aware division of the preview image and carries out the metering with different weights, so that metering can better follow the different regions of the displayed picture, improving metering accuracy and yielding photometric data better matching the actual imaging demand, so that subsequent exposure is reasonable and the photographing effect is guaranteed.
In the light metering method of this embodiment of the invention, while the preview image captured by the first camera is output to the photographing preview interface, the target metering region of the preview image is obtained, and the target depth-of-field range of that region and the depth-of-field values of all pixels in the edge region, as detected by the second camera, are obtained, so as to determine the center metering region, the auxiliary metering region, and their corresponding metering weights; finally the preview image is metered with the adjusted weights. Through the cooperation of the two cameras, this process obtains the target metering region in the preview image and its depth-of-field range, adjusts the metering weights of the whole preview image based on that range, and uses local depth-of-field information to determine the metering strategy for the whole frame. Subjects at different spatial positions are metered with differentiated weights, which guarantees metering accuracy and improves the metering result.
Third embodiment
A light metering method is disclosed in this embodiment, applied to a mobile terminal including a first camera and a second camera. With reference to Fig. 3, the method includes:
Step 301: while the preview image captured by the first camera is being output to the photographing preview interface, obtain the target metering region of the preview image.
When the camera is started, the mobile terminal is in the photographing preview interface, and the camera continuously captures the external scene to obtain preview images, which are output to and displayed in the photographing preview interface. The process in which the first camera captures the preview image and outputs it to the photographing preview interface includes: the first camera capturing the external scene, processing it into a preview image, delivering the preview image to the display screen, and displaying the preview image captured by the first camera in the photographing preview interface.
Step 302: obtain the target depth-of-field range of the target metering region as detected by the second camera.
This is realized through the cooperation of the second camera and the first camera: by combining the image data captured by the second camera with the image data obtained while the first camera captures the preview image, the depth-of-field parameters of the subjects in the preview image can be obtained by triangulating objects between the two cameras, and the target depth-of-field range of the target metering region in the preview image can then be obtained.
Step 303: based on the target depth-of-field range, calculate the average depth of field c of all pixels in the target metering region.
There are many pixels in the target metering region corresponding to the target depth-of-field range, and each pixel has the depth-of-field value of the part of the preview image it displays. The average depth of field is obtained by summing the depth-of-field values of those pixels and averaging, specifically as the arithmetic mean.
Step 304: control the second camera to detect the depth-of-field values of all pixels in the edge region of the preview image.
Here the edge region is: the whole image area other than the target metering region.
This is realized through the cooperation of the second camera and the first camera: by combining the image data captured by the second camera with the image data obtained while the first camera captures the preview image, the depth-of-field parameters of the subjects in the preview image can be obtained by triangulating objects between the two cameras, and the depth-of-field values of the pixels in the whole image area of the preview image other than the target metering region can then be obtained.
Step 305: In the target photometry region and the edge region, successively extract the image regions whose depth-of-field ranges are [c-w1, c+w1], [c-w2, c-w1) ∪ (c+w1, c+w2], [c-w3, c-w2) ∪ (c+w2, c+w3], ..., [c-wm, c-wn) ∪ (c+wn, c+wm].
Here, 0 ≤ w1 < w2 < w3 < ... < wn < wm.
This step performs weighted metering according to the depth information of the pixels. The average depth of field of all pixels in the target photometry region is c. Specifically, for example, the pixels whose depth is c may be assigned the highest metering weight; pixels whose depth is greater than c-5 and less than c+5 are assigned the next-highest metering weight; and so on. The larger the gap between a pixel's depth value and c, the smaller the metering weight of the region to which that pixel belongs.
Step 306: Respectively adjust the metering weights of the image regions whose depth-of-field ranges are [c-w1, c+w1], [c-w2, c-w1) ∪ (c+w1, c+w2], [c-w3, c-w2) ∪ (c+w2, c+w3], ..., [c-wm, c-wn) ∪ (c+wn, c+wm] to m1%, m2%, m3%, ..., mn%.
Here, mn < ... < m3 < m2 < m1, and m1% + m2% + m3% + ... + mn% = 100%.
Specifically, w1 may take the value 0; that is, when the difference between a pixel's depth value and the average depth of field is 0, the region corresponding to those pixels is given the maximum metering weight. When w1 = 0 and w2 = 5, the image region whose depth-of-field range is [c-5, c) ∪ (c, c+5], i.e., where the difference between the pixel depth values and the average depth of field falls in a first range [-5, 0) ∪ (0, 5], is assigned the next-highest metering weight. Similarly, the regions whose pixel depth values differ from the average depth of field by amounts in a second range (for example, -10 to -5 and 5 to 10) are assigned a smaller metering weight, where the absolute value of every value in the second range is greater than the absolute value of every value in the first range; and so on, with decreasing metering weights. Metering is thus performed with the subject as the priority, improving the accuracy of metering.
For example, suppose the preview image contains a kitten and grass, and the user taps the kitten. The average depth of field c of all pixels in the region where the kitten is located is then calculated. By matching depth-of-field ranges over the image regions containing the kitten and the grass, the preview image can be divided as follows: the pixel region whose depth is c is given the highest metering weight; the pixel region where the difference between the depth and the average depth of field is greater than 0 and at most 5 is given the next-highest weight; the pixel region where the difference is greater than 5 and at most 10 is given a smaller weight; and so on. By selecting a single region, the preview image is automatically partitioned using that region's average depth of field, improving the accuracy and intelligence of the metering process.
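The banded weight assignment of Steps 305 and 306 amounts to a lookup on the gap between a pixel's depth and the average depth c. The following is an illustrative Python sketch under the assumption that the thresholds w1...wm and the weights m1...mn are supplied as lists; all names are hypothetical.

```python
def band_weight(depth, c, thresholds, weights):
    """Metering weight for a pixel, per Steps 305/306.

    Pixels whose depth differs from the average depth c by at most w1 get
    weight m1, those whose gap lies in (w1, w2] get m2, and so on.
    thresholds = [w1, w2, ..., wm] in ascending order;
    weights = [m1, m2, ..., mn] as fractions summing to 1.
    """
    gap = abs(depth - c)
    for w, m in zip(thresholds, weights):
        if gap <= w:
            return m
    return 0.0  # outside the outermost band [c-wm, c+wm]

# Mirroring the kitten/grass example with c = 5 and bands at gaps 0, 5, 10:
# band_weight(5, 5, [0, 5, 10], [0.5, 0.3, 0.2]) returns 0.5 (highest weight)
```

Because the thresholds are ascending and the weights descending, the invariant mn < ... < m2 < m1 of Step 306 is preserved by construction.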
Step 307: According to the metering weights m1%, m2%, m3%, ..., mn% corresponding to the image regions whose depth-of-field ranges are [c-w1, c+w1], [c-w2, c-w1) ∪ (c+w1, c+w2], [c-w3, c-w2) ∪ (c+w2, c+w3], ..., [c-wm, c-wn) ∪ (c+wn, c+wm], perform metering on each of those image regions.
In this process, the average depth of field of all pixels in the target photometry region is calculated based on the target depth-of-field range; the different image regions of the target photometry region and the edge region are extracted; different metering weights are adjusted for the different image regions; and finally metering is performed on the different image regions according to their different weights. This realizes a differentiated, prioritized partition of the preview image with metering applied at different weights, so that metering better follows the different regions of the displayed picture, the accuracy of metering is improved, and metering data better matching the actual imaging demand is obtained, enabling subsequent reasonable exposure and guaranteeing the photographing effect.
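Combining the per-band weights of Step 306 into a single metering value, as Step 307 does, can be sketched as a weighted average of each band's mean luminance. This is an illustrative Python sketch and not the patent's implementation; the names are hypothetical.

```python
def weighted_meter(band_luminances, band_weights):
    """Weighted metering over image regions (Step 307).

    band_luminances[i] is the mean luminance of the i-th depth band;
    band_weights[i] is its metering weight m_i as a fraction.
    The weights must sum to 1 (i.e., m1% + ... + mn% = 100%), so the
    result is a weighted average luminance used to drive exposure.
    """
    assert abs(sum(band_weights) - 1.0) < 1e-9, "weights must sum to 100%"
    return sum(l * w for l, w in zip(band_luminances, band_weights))

# Two bands: subject band luminance 120 with weight 0.75,
# background band luminance 40 with weight 0.25, result 100.0
```

Weighting the subject band more heavily pulls the metered value toward the subject's brightness, which is exactly the prioritization the step describes.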
In the light metering method of this embodiment of the present invention, the target photometry region of the preview image and the target depth-of-field range of that region are obtained; the average depth of field of all pixels in the target photometry region is calculated; the different image regions of the target photometry region and the edge region are extracted and given different metering weights; and finally metering is performed on the preview image based on the adjusted weights. Through the cooperation of the two cameras, the target photometry region in the preview image and its target depth-of-field range are obtained, and the metering weights of the whole preview image are adjusted based on the depth-of-field range of the target photometry region. The metering strategy for the entire image is thus determined from the depth-of-field information of a local region, so that photographed objects at different spatial positions in the preview image are metered with differentiated weights, guaranteeing the accuracy of metering and improving the metering effect.
Fourth embodiment
This embodiment discloses a light metering method applied to a mobile terminal that includes a first camera and a second camera. With reference to Fig. 4, the method includes:
Step 401: During the process in which the first camera acquires a preview image and outputs it to the photographing preview interface, obtain the target photometry region of the preview image.
When the camera is opened, the mobile terminal is in the photographing preview interface, and the camera continuously collects the external scene to obtain preview images, which are output to and displayed in the photographing preview interface. The process in which the first camera acquires the preview image and outputs it to the photographing preview interface thus includes: the first camera collecting the external scene, processing it to obtain the preview image, conveying the preview image to the display screen, and displaying, in the photographing preview interface, the preview image acquired by the first camera.
The target photometry region is a part of the display area in the photographing preview interface. It may be obtained automatically by the mobile terminal, or obtained according to a selection instruction of the user. The subsequent metering processing is carried out based on this target photometry region.
Step 402: Obtain the target depth-of-field range of the target photometry region detected by the second camera.
This step is realized through the cooperation of the second camera and the first camera. The image data captured by the second camera is combined with the image data obtained while the first camera acquires the preview image; by triangulating the photographed objects between the two cameras, the depth-of-field parameter information of the subjects in the preview image can be obtained, from which the target depth-of-field range of the target photometry region in the preview image is then derived.
Step 403: Based on the target depth-of-field range, calculate the average depth of field d of all pixels in the target photometry region.
There are many pixels in the target photometry region corresponding to the target depth-of-field range, and each pixel carries a depth value for the portion of the preview image it represents. The average depth of field is obtained by summing the depth values of these pixels and averaging, for example by taking the arithmetic mean.
Step 404: Determine the target value range in which the average depth of field d falls.
After the average depth of field of all pixels in the target photometry region has been calculated, the target value range in which the average depth of field d falls is determined, and the subsequent metering weight is determined according to that target value range.
Step 405: According to the preset correspondence between value ranges and metering weights, determine the metering weight s% corresponding to the target value range.
The metering weight corresponds to the value range in which the average depth of field d falls. After the target value range of d has been determined, the metering weight corresponding to the target value range is obtained from the correspondence between value ranges and metering weights.
The larger the value of the average depth of field d, the larger the value of s%: the magnitude of d is positively correlated with the magnitude of the metering weight, so that the subsequent exposure compensation for the corresponding region is increased when the depth value is large, improving the photographing effect.
Step 406: Determine the metering weight of the edge region in the preview image to be t%.
The edge region is: all image regions other than the target photometry region, where s + t = 100. This step divides the image regions of the preview image into two parts, and the two parts' metering weights sum to 100%. Preferably, s > t, i.e., the value of s is greater than the value of t.
For example, suppose the preview image contains a kitten and grass, and the user taps the kitten. The average depth of field d of all pixels in the region where the kitten is located is then calculated. If d is 5, falling in the value range 4 ≤ d < 6, the metering weight of the kitten's region is determined to be the 70% corresponding to the range 4 ≤ d < 6, and the metering weight of the grass region is 30%. If d is 7, falling in the value range 6 ≤ d < 8, the metering weight of the kitten's region is the 80% corresponding to the range 6 ≤ d < 8, and the metering weight of the grass region is 20%; and so on. By selecting a region, the metering weights of the selected region and the other regions can be determined from the value range in which that region's average depth of field falls, improving the rationality and intelligence of the metering.
Step 407: According to the metering weight s% of the center photometry region and the metering weight t% of the edge region, perform metering on the center photometry region and the edge region.
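The range-to-weight lookup of Steps 404 and 405 can be sketched as follows, assuming the preset correspondence is given as a table of half-open ranges. The names and the example values (taken from the kitten/grass example above) are illustrative, not from the patent.

```python
def weight_from_depth_range(d, table):
    """Steps 404/405: find the metering weight s% whose preset value range
    contains the average depth of field d.

    table is a list of ((lo, hi), weight) pairs with half-open ranges
    [lo, hi); per Step 405, larger d maps to a larger weight.
    """
    for (lo, hi), s in table:
        if lo <= d < hi:
            return s
    raise ValueError("no preset range contains d")

# Table from the example: 4 <= d < 6 maps to 70, 6 <= d < 8 maps to 80.
TABLE = [((4, 6), 70), ((6, 8), 80)]
# weight_from_depth_range(5, TABLE) returns 70; the edge region then
# receives t = 100 - s, per Step 406.
```

Keeping the ranges half-open avoids any depth value matching two rows of the table at a boundary.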
The center photometry region here is the target photometry region. After the target depth-of-field range of the target photometry region has been obtained, the average depth of field d of all pixels in the target photometry region calculated, the target value range of d determined, the metering weight s% corresponding to the target value range determined, and the metering weight of the edge region determined to be t%, metering can be performed on the center photometry region and the edge region of the preview image according to these metering weights.
In the light metering method of this embodiment of the present invention, the target photometry region of the preview image and its target depth-of-field range are obtained; the average depth of field of all pixels in the target photometry region is calculated; different metering weights are then obtained for the target photometry region and the edge region; and finally metering is performed on the preview image based on the adjusted weights. Through the cooperation of the two cameras, the target photometry region in the preview image and its target depth-of-field range are obtained, and the metering weights of the whole preview image are adjusted based on the depth-of-field range of the target photometry region. The metering strategy for the entire image is thus determined from the depth-of-field information of a local region, so that photographed objects at different spatial positions in the preview image are metered with differentiated weights, guaranteeing the accuracy of metering and improving the metering effect.
Fifth embodiment
This embodiment of the present invention discloses a mobile terminal capable of realizing the details of the light metering methods of the first to fourth embodiments and achieving the same effects. The mobile terminal includes a first camera and a second camera and, with reference to Fig. 5 and Fig. 6, further includes: a first obtaining module 501, a second obtaining module 502, an adjustment module 503, and a metering module 504.
The first obtaining module 501 is configured to obtain the target photometry region of the preview image during the process in which the first camera acquires the preview image and outputs it to the photographing preview interface.
The second obtaining module 502 is configured to obtain the target depth-of-field range, detected by the second camera, of the target photometry region obtained by the first obtaining module 501.
The adjustment module 503 is configured to adjust the metering weights of the preview image based on the target depth-of-field range obtained by the second obtaining module 502.
The metering module 504 is configured to perform metering on the preview image based on the metering weights adjusted by the adjustment module 503.
The adjustment module 503 includes: a first control submodule 5031, a first extraction submodule 5032, a first determination submodule 5033, a second determination submodule 5034, and a first adjustment submodule 5035.
The first control submodule 5031 is configured to control the second camera to detect the depth-of-field range of all pixels in the edge region of the preview image.
The first extraction submodule 5032 is configured to extract, in the edge region, at least one target edge region whose depth-of-field range, detected by the second camera under control of the first control submodule 5031, is the target depth-of-field range.
The first determination submodule 5033 is configured to determine the target photometry region and the at least one target edge region extracted by the first extraction submodule 5032 to be the center photometry region.
The second determination submodule 5034 is configured to determine all image regions of the preview image other than the center photometry region determined by the first determination submodule 5033 to be the auxiliary photometry region.
The first adjustment submodule 5035 is configured to adjust the metering weights of the center photometry region determined by the first determination submodule 5033 and of the auxiliary photometry region determined by the second determination submodule 5034. Here, the edge region is: all image regions other than the target photometry region.
Optionally, the first adjustment submodule 5035 includes: a first adjustment unit 50351 and a second adjustment unit 50352.
The first adjustment unit 50351 is configured to adjust the metering weight of the center photometry region to 100%.
The second adjustment unit 50352 is configured to adjust the metering weight of the auxiliary photometry region to 0.
The metering module 504 then includes: a first metering submodule 5041.
The first metering submodule 5041 is configured to perform metering on the center photometry region of the preview image according to the 100% metering weight of the center photometry region.
Optionally, the first adjustment submodule 5035 includes: a third adjustment unit 50353 and a fourth adjustment unit 50354.
The third adjustment unit 50353 is configured to adjust the metering weight of the center photometry region to a%.
The fourth adjustment unit 50354 is configured to adjust the metering weight of the auxiliary photometry region to b%.
The metering module 504 then includes: a second metering submodule 5042.
The second metering submodule 5042 is configured to perform metering on the center photometry region and the auxiliary photometry region according to the metering weight a% of the center photometry region and the metering weight b% of the auxiliary photometry region. Here, 50 < a < 100 and a + b = 100.
Optionally, the adjustment module 503 includes: a first calculation submodule 5036, a second control submodule 5037, a second extraction submodule 5038, and a second adjustment submodule 5039.
The first calculation submodule 5036 is configured to calculate, based on the target depth-of-field range, the average depth of field c of all pixels in the target photometry region.
The second control submodule 5037 is configured to control the second camera to detect the depth-of-field range of all pixels in the edge region of the preview image.
The second extraction submodule 5038 is configured to successively extract, in the target photometry region and the edge region, the image regions whose depth-of-field ranges, detected by the second camera under control of the second control submodule 5037, are [c-w1, c+w1], [c-w2, c-w1) ∪ (c+w1, c+w2], [c-w3, c-w2) ∪ (c+w2, c+w3], ..., [c-wm, c-wn) ∪ (c+wn, c+wm].
The second adjustment submodule 5039 is configured to respectively adjust the metering weights of the image regions extracted by the second extraction submodule 5038, whose depth-of-field ranges are [c-w1, c+w1], [c-w2, c-w1) ∪ (c+w1, c+w2], ..., [c-wm, c-wn) ∪ (c+wn, c+wm], to m1%, m2%, m3%, ..., mn%.
The metering module 504 then includes: a third metering submodule 5043.
The third metering submodule 5043 is configured to perform metering on each of the image regions whose depth-of-field ranges are [c-w1, c+w1], [c-w2, c-w1) ∪ (c+w1, c+w2], ..., [c-wm, c-wn) ∪ (c+wn, c+wm] according to their corresponding metering weights m1%, m2%, m3%, ..., mn%. Here, 0 ≤ w1 < w2 < w3 < ... < wn < wm, mn < ... < m3 < m2 < m1, m1% + m2% + m3% + ... + mn% = 100%, and the edge region is: all image regions other than the target photometry region.
Optionally, the adjustment module 503 includes: a second calculation submodule 50310, a third determination submodule 50311, and a fourth determination submodule 50312.
The second calculation submodule 50310 is configured to calculate, based on the target depth-of-field range, the average depth of field d of all pixels in the target photometry region.
The third determination submodule 50311 is configured to determine the target value range in which the average depth of field d calculated by the second calculation submodule 50310 falls.
The fourth determination submodule 50312 is configured to determine, according to the preset correspondence between value ranges and metering weights, the metering weight s% corresponding to the target value range determined by the third determination submodule 50311.
The metering module 504 then includes: a fifth determination submodule 5044 and a fourth metering submodule 5045.
The fifth determination submodule 5044 is configured to determine the metering weight of the edge region in the preview image to be t%.
The fourth metering submodule 5045 is configured to perform metering on the center photometry region and the edge region according to the metering weight s% of the center photometry region and the metering weight t% of the edge region determined by the fifth determination submodule 5044. Here, the center photometry region is the target photometry region; the edge region is: all image regions other than the target photometry region; the larger the value of the average depth of field d, the larger the value of s%; and s + t = 100.
Optionally, the first obtaining module 501 includes: a detection submodule 5011, an acquisition submodule 5012, and a sixth determination submodule 5013.
The detection submodule 5011 is configured to detect a click operation of the mobile terminal user on the photographing preview interface.
The acquisition submodule 5012 is configured to, when a click operation is detected, obtain the position of the click operation detected by the detection submodule 5011.
The sixth determination submodule 5013 is configured to determine the region of a preset first range in the preview image that contains the position of the click operation obtained by the acquisition submodule 5012 to be the target photometry region.
Optionally, the first obtaining module 501 includes: a seventh determination submodule 5014.
The seventh determination submodule 5014 is configured to determine the region of a preset second range in the preview image to be the target photometry region.
Optionally, the first obtaining module 501 includes: a detection submodule 5015 and an eighth determination submodule 5016.
The detection submodule 5015 is configured to perform face detection on the preview image.
The eighth determination submodule 5016 is configured to, when a face is detected, determine the region where the face is located to be the target photometry region.
In the mobile terminal of this embodiment of the present invention, the target photometry region of the preview image is obtained during the process in which the first camera acquires the preview image and outputs it to the photographing preview interface; the target depth-of-field range of the target photometry region detected by the second camera is obtained; the metering weights of the preview image are adjusted based on the target depth-of-field range; and finally metering is performed on the preview image based on the adjusted weights. Through the cooperation of the two cameras, the preview image and the target depth-of-field range of the target photometry region within it are obtained, and the metering weights of the whole preview image are adjusted based on the depth-of-field range of the target photometry region. The metering strategy for the entire image is thus determined from the depth-of-field information of a local region, so that photographed objects at different spatial positions in the preview image are metered with differentiated weights, guaranteeing the accuracy of metering and improving the metering effect.
Sixth embodiment
As shown in Fig. 7, the mobile terminal 600 includes: at least one processor 601, a memory 602, at least one network interface 604, a user interface 603, and cameras including a first camera 606 and a second camera 607. The various components in the mobile terminal 600 are coupled together by a bus system 605. It can be understood that the bus system 605 is used to realize connection and communication between these components. In addition to a data bus, the bus system 605 further includes a power bus, a control bus, and a status signal bus. For clarity of explanation, however, the various buses are all labeled as the bus system 605 in Fig. 7.
The user interface 603 may include a display, a keyboard, or a pointing device (for example, a mouse, a trackball, a touch-sensitive pad, or a touch screen).
It can be understood that the memory 602 in this embodiment of the present invention may be a volatile memory or a nonvolatile memory, or may include both volatile and nonvolatile memories. The nonvolatile memory may be a Read-Only Memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an Electrically EPROM (EEPROM), or a flash memory. The volatile memory may be a Random Access Memory (RAM), used as an external cache. By way of example and not limitation, many forms of RAM are available, such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synch Link DRAM (SLDRAM), and Direct Rambus RAM (DRRAM). The memory 602 of the systems and methods described in this embodiment of the present invention is intended to include, without being limited to, these and any other suitable types of memory.
In some embodiments, the memory 602 stores the following elements, executable modules or data structures, or a subset or superset thereof: an operating system 6021 and application programs 6022.
The operating system 6021 includes various system programs, such as a framework layer, a core library layer, and a driver layer, for realizing various basic services and processing hardware-based tasks. The application programs 6022 include various application programs, such as a media player and a browser, for realizing various application services. A program implementing the method of this embodiment of the present invention may be included in the application programs 6022.
In this embodiment of the present invention, by calling a program or instructions stored in the memory 602 (specifically, a program or instructions stored in the application programs 6022), the processor 601 is configured to: obtain the target photometry region of the preview image during the process in which the first camera acquires the preview image and outputs it to the photographing preview interface; obtain the target depth-of-field range of the target photometry region detected by the second camera; adjust the metering weights of the preview image based on the target depth-of-field range; and perform metering on the preview image based on the adjusted metering weights.
The method disclosed in the above embodiment of the present invention may be applied to, or realized by, the processor 601. The processor 601 may be an integrated circuit chip with signal processing capability. During implementation, each step of the above method may be completed by an integrated logic circuit of hardware in the processor 601 or by instructions in the form of software. The processor 601 may be a general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and can implement or execute the methods, steps, and logic diagrams disclosed in the embodiments of the present invention. The general-purpose processor may be a microprocessor or any conventional processor. The steps of the method disclosed in the embodiments of the present invention may be directly embodied as being executed and completed by a hardware decoding processor, or by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium mature in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory 602; the processor 601 reads the information in the memory 602 and completes the steps of the above method in combination with its hardware.
It can be understood that the embodiments described in the embodiments of the present invention may be implemented in hardware, software, firmware, middleware, microcode, or a combination thereof. For hardware implementation, the processing unit may be implemented in one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), DSP Devices (DSPDs), Programmable Logic Devices (PLDs), Field-Programmable Gate Arrays (FPGAs), general-purpose processors, controllers, microcontrollers, microprocessors, other electronic units for performing the functions described herein, or a combination thereof.
For software implementation, the techniques described in the embodiments of the present invention may be realized by modules (for example, processes, functions, and the like) that perform the described functions. Software code may be stored in the memory and executed by the processor. The memory may be implemented inside the processor or outside the processor.
Optionally, the processor 601 is further configured to: obtain overall distance parameter information between the displayed objects in the preview interface and a reference plane; select a first area in the preview interface; and determine, according to the overall distance parameter information, a distance information value of the displayed objects in the first area.
As another embodiment, the mobile terminal includes at least two cameras, and the processor 601 is further configured to: control the second camera to detect the depth-of-field range of all pixels in the edge region of the preview image; extract, in the edge region, at least one target edge region whose depth-of-field range is the target depth-of-field range; determine the target photometry region and the at least one target edge region to be the center photometry region; determine all image regions of the preview image other than the center photometry region to be the auxiliary photometry region; and adjust the metering weights of the center photometry region and the auxiliary photometry region. Here, the edge region is: all image regions other than the target photometry region.
Optionally, as another embodiment, the processor 601 is further configured to: adjust the metering weight of the center photometry region to 100%; adjust the metering weight of the auxiliary photometry region to 0; and perform metering on the center photometry region of the preview image according to the 100% metering weight of the center photometry region.
Optionally, as another embodiment, the processor 601 is further configured to: adjust the metering weight of the center photometry region to a%; adjust the metering weight of the auxiliary photometry region to b%; and perform metering on the center photometry region and the auxiliary photometry region according to the metering weight a% of the center photometry region and the metering weight b% of the auxiliary photometry region. Here, 50 < a < 100 and a + b = 100.
Optionally, as another embodiment, the processor 601 is further configured to: calculate, based on the target depth-of-field range, the average depth of field c of all pixels in the target photometry region; control the second camera to detect the depth-of-field range of all pixels in the edge region of the preview image; successively extract, in the target photometry region and the edge region, the image regions whose depth-of-field ranges are [c-w1, c+w1], [c-w2, c-w1) ∪ (c+w1, c+w2], [c-w3, c-w2) ∪ (c+w2, c+w3], ..., [c-wm, c-wn) ∪ (c+wn, c+wm]; respectively adjust the metering weights of those image regions to m1%, m2%, m3%, ..., mn%; and perform metering on each of those image regions according to its corresponding metering weight. Here, 0 ≤ w1 < w2 < w3 < ... < wn < wm, mn < ... < m3 < m2 < m1, m1% + m2% + m3% + ... + mn% = 100%, and the edge region is: all image regions other than the target photometry region.
Optionally, as another embodiment, the processor 601 is further configured to: calculate, based on the target depth-of-field range, the average depth of field d of all pixels in the target photometry region; determine the target value range in which the average depth of field d falls; determine, according to the preset correspondence between value ranges and metering weights, the metering weight s% corresponding to the target value range; determine the metering weight of the edge region in the preview image to be t%; and perform metering on the center photometry region and the edge region according to the metering weight s% of the center photometry region and the metering weight t% of the edge region. Here, the center photometry region is the target photometry region; the edge region is: all image regions other than the target photometry region; the larger the value of the average depth of field d, the larger the value of s%; and s + t = 100.
Optionally, as another embodiment, the processor 601 is further configured to: detect a tap operation by the mobile terminal user on the photographing preview interface; when a tap operation is detected, obtain the position of the tap operation; and determine, as the target metering region, the region of a preset first range in the preview image that contains the position of the tap operation.
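The preset first range around a tap can be pictured as a fixed-size rectangle clamped to the preview bounds; the half-width and half-height parameters below are illustrative assumptions, not values specified by the patent.

```python
def tap_region(x, y, half_w, half_h, img_w, img_h):
    """Rectangle of the preset first range containing the tap position
    (x, y), clipped to a preview image of size img_w x img_h."""
    left = max(0, x - half_w)
    top = max(0, y - half_h)
    right = min(img_w, x + half_w)
    bottom = min(img_h, y + half_h)
    return left, top, right, bottom
```

A tap near a corner simply yields a smaller clipped rectangle, so the metering region never extends outside the preview image.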
Optionally, as another embodiment, the processor 601 is further configured to determine the region of a preset second range in the preview image as the target metering region.
Optionally, as another embodiment, the processor 601 is further configured to: perform face detection on the preview image; and, when a face is detected, determine the region where the face is located as the target metering region.
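A sketch of choosing the metering region from face detection results; the detector itself is out of scope here, so the face boxes are assumed inputs, and the fallback to a centred preset region when no face is found is an illustrative assumption rather than behavior stated by the patent.

```python
def face_metering_region(faces, img_w, img_h, frac=0.25):
    """faces: (x, y, w, h) boxes from any face detector (assumed input).
    Returns the largest face box as (left, top, right, bottom), or a
    centred preset region spanning `frac` of each dimension otherwise."""
    if faces:
        # The largest detected face is taken as the metering subject.
        x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
        return x, y, x + w, y + h
    half_w, half_h = int(img_w * frac) // 2, int(img_h * frac) // 2
    cx, cy = img_w // 2, img_h // 2
    return cx - half_w, cy - half_h, cx + half_w, cy + half_h
```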
The mobile terminal can implement each process implemented by the terminal in the foregoing embodiments; to avoid repetition, details are not described here again.
In the mobile terminal of this embodiment of the present invention, while the first camera captures the preview image and outputs it to the photographing preview interface, the target metering region of the preview image is obtained, the target depth-of-field range of the target metering region detected by the second camera is obtained, the metering weight of the preview image is adjusted based on the target depth-of-field range, and the preview image is finally metered based on the adjusted metering weight. Through the cooperation of the two cameras, the mobile terminal obtains the preview image and the target depth-of-field range of the target metering region, and adjusts the metering weight of the whole preview image based on the depth-of-field range of the target metering region, so that the metering strategy for the entire image is determined by the depth-of-field information of a local region. Photographed subjects at different spatial positions in the preview image are thus metered with differentiated metering weights, which ensures metering accuracy and improves the metering effect.
7th embodiment
As shown in Fig. 8, the mobile terminal 700 may be a mobile phone, a tablet computer, a personal digital assistant (Personal Digital Assistant, PDA), an in-vehicle computer, or the like.
The mobile terminal 700 in Fig. 8 includes a radio frequency (Radio Frequency, RF) circuit 710, a memory 720, an input unit 730, a display unit 740, a processor 760, an audio circuit 770, a WiFi (Wireless Fidelity) module 780, a power supply 790, and cameras, the cameras including a first camera 751 and a second camera 752.
The input unit 730 may be configured to receive digit or character information entered by the user and to generate signal input related to user settings and function control of the mobile terminal 700. Specifically, in this embodiment of the present invention, the input unit 730 may include a touch panel 731. The touch panel 731, also referred to as a touchscreen, can collect a touch operation performed by the user on or near it (for example, an operation performed by the user on the touch panel 731 with a finger, a stylus, or any other suitable object or accessory) and drive a corresponding connected apparatus according to a preset program. Optionally, the touch panel 731 may include two parts: a touch detection apparatus and a touch controller. The touch detection apparatus detects the touch orientation of the user, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection apparatus, converts it into contact coordinates, sends the coordinates to the processor 760, and can receive and execute commands sent by the processor 760. In addition, the touch panel 731 may be implemented in multiple types, such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch panel 731, the input unit 730 may further include other input devices 732, which may include but are not limited to one or more of a physical keyboard, function keys (such as volume control keys and an on/off key), a trackball, a mouse, and a joystick.
The display unit 740 may be configured to display information entered by the user, information provided to the user, and the various menu interfaces of the mobile terminal 700. The display unit 740 may include a display panel 741; optionally, the display panel 741 may be configured in a form such as an LCD or an organic light-emitting diode (Organic Light-Emitting Diode, OLED).
It should be noted that the touch panel 731 may cover the display panel 741 to form a touch display screen. After the touch display screen detects a touch operation on or near it, the operation is transmitted to the processor 760 to determine the type of the touch event, and the processor 760 then provides a corresponding visual output on the touch display screen according to the type of the touch event.
The touch display screen includes an application interface display area and a common control display area. The arrangement of the application interface display area and the common control display area is not limited; they may be arranged one above the other, side by side, or in any other arrangement that distinguishes the two display areas. The application interface display area may be used to display the interfaces of applications. Each interface may contain interface elements such as icons of at least one application and/or widget desktop controls, or the interface may be an empty interface containing no content. The common control display area is used to display controls with a high usage rate, for example, application icons such as a settings button, an interface number, a scroll bar, and a phonebook icon.
The processor 760 is the control center of the mobile terminal 700. It connects the various parts of the whole mobile phone through various interfaces and lines, and performs the various functions of the mobile terminal 700 and processes data by running or executing the software programs and/or modules stored in the first memory 721 and calling the data stored in the second memory 722, thereby monitoring the mobile terminal 700 as a whole. Optionally, the processor 760 may include one or more processing units.
In this embodiment of the present invention, by calling the software programs and/or modules stored in the first memory 721 and/or the data stored in the second memory 722, the processor 760 is configured to: obtain the target metering region of the preview image while the first camera captures the preview image and outputs it to the photographing preview interface; obtain the target depth-of-field range of the target metering region detected by the second camera; adjust the metering weight of the preview image based on the target depth-of-field range; and meter the preview image based on the adjusted metering weight.
Optionally, as another embodiment, the processor 760 is further configured to: control the second camera to detect the depth-of-field ranges of all pixels in the edge region of the preview image; extract, from the edge region, at least one target edge region whose depth-of-field range is the target depth-of-field range; determine the target metering region and the at least one target edge region as a center metering region; determine all image regions in the preview image other than the center metering region as an auxiliary metering region; and adjust the metering weights of the center metering region and the auxiliary metering region; wherein the edge region is all image regions other than the target metering region.
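The center/auxiliary split described above amounts to adding edge pixels whose depth falls inside the target depth-of-field range to the center region; the following sketch assumes pixels are indexed flat and that the per-pixel depths come from the second camera, purely for illustration.

```python
def split_center_auxiliary(depths, target_region, depth_lo, depth_hi):
    """depths: per-pixel depth values; target_region: indices of the
    target metering region; [depth_lo, depth_hi]: target depth-of-field
    range. Returns (center, auxiliary) as sets of pixel indices."""
    center = set(target_region)
    for i, d in enumerate(depths):
        # Edge pixels at the subject's depth join the center region.
        if i not in center and depth_lo <= d <= depth_hi:
            center.add(i)
    auxiliary = set(range(len(depths))) - center
    return center, auxiliary
```

In this reading, a second subject standing at the same distance as the tapped subject is automatically metered as part of the center region even though it lies outside the target metering region.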
Optionally, as another embodiment, the processor 760 is further configured to: adjust the metering weight of the center metering region to 100%; adjust the metering weight of the auxiliary metering region to 0; and meter the center metering region of the preview image according to the metering weight 100% of the center metering region.
Optionally, as another embodiment, the processor 760 is further configured to: adjust the metering weight of the center metering region to a%; adjust the metering weight of the auxiliary metering region to b%; and meter the center metering region and the auxiliary metering region according to the metering weight a% of the center metering region and the metering weight b% of the auxiliary metering region; wherein 50 < a < 100 and a + b = 100.
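The a%/b% weighting can be read as a weighted mean of the two regions' luminance; the arithmetic-mean luminance model below is an assumption made only to illustrate the split, and the constraint 50 < a < 100 guarantees the center region always dominates.

```python
def weighted_metering(center_lumas, aux_lumas, a=70):
    """a: center metering weight in percent with 50 < a < 100, so the
    center region always dominates; b = 100 - a goes to the auxiliary
    region. Returns the combined metered luminance."""
    assert 50 < a < 100
    b = 100 - a
    mean = lambda xs: sum(xs) / len(xs) if xs else 0.0
    return (a * mean(center_lumas) + b * mean(aux_lumas)) / 100.0
```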
Optionally, as another embodiment, the processor 760 is further configured to: calculate, based on the target depth-of-field range, an average depth of field c of all pixels in the target metering region; control the second camera to detect the depth-of-field ranges of all pixels in the edge region of the preview image; successively extract, from the target metering region and the edge region, the image regions whose depth-of-field ranges are [c-w1, c+w1], [c-w2, c-w1)∪(c+w1, c+w2], [c-w3, c-w2)∪(c+w2, c+w3], ..., [c-wm, c-wn)∪(c+wn, c+wm]; adjust the metering weights of these image regions to m1%, m2%, m3%, ..., mn%, respectively; and meter each of these image regions according to its corresponding metering weight m1%, m2%, m3%, ..., mn%; wherein 0 ≤ w1 < w2 < w3 < ... < wn < wm, mn < ... < m3 < m2 < m1, m1% + m2% + m3% + ... + mn% = 100%, and the edge region is all image regions other than the target metering region.
Optionally, as another embodiment, the processor 760 is further configured to: calculate, based on the target depth-of-field range, an average depth of field d of all pixels in the target metering region; determine the target value range in which the average depth of field d falls; determine, according to a preset correspondence between value ranges and metering weights, the metering weight s% corresponding to the target value range; determine the metering weight of the edge region in the preview image to be t%; and meter the center metering region and the edge region according to the metering weight s% of the center metering region and the metering weight t% of the edge region; wherein the center metering region is the target metering region, the edge region is all image regions other than the target metering region, a larger value of the average depth of field d corresponds to a larger value of s%, and s + t = 100.
Optionally, as another embodiment, the processor 760 is further configured to: detect a tap operation by the mobile terminal user on the photographing preview interface; when a tap operation is detected, obtain the position of the tap operation; and determine, as the target metering region, the region of a preset first range in the preview image that contains the position of the tap operation.
Optionally, as another embodiment, the processor 760 is further configured to determine the region of a preset second range in the preview image as the target metering region.
Optionally, as another embodiment, the processor 760 is further configured to: perform face detection on the preview image; and, when a face is detected, determine the region where the face is located as the target metering region.
The mobile terminal can implement each process implemented by the terminal in the foregoing embodiments; to avoid repetition, details are not described here again.
In the mobile terminal of this embodiment of the present invention, while the first camera captures the preview image and outputs it to the photographing preview interface, the target metering region of the preview image is obtained, the target depth-of-field range of the target metering region detected by the second camera is obtained, the metering weight of the preview image is adjusted based on the target depth-of-field range, and the preview image is finally metered based on the adjusted metering weight. Through the cooperation of the two cameras, the mobile terminal obtains the preview image and the target depth-of-field range of the target metering region, and adjusts the metering weight of the whole preview image based on the depth-of-field range of the target metering region, so that the metering strategy for the entire image is determined by the depth-of-field information of a local region. Photographed subjects at different spatial positions in the preview image are thus metered with differentiated metering weights, which ensures metering accuracy and improves the metering effect.
A person of ordinary skill in the art may be aware that the units and algorithm steps of the examples described with reference to the embodiments disclosed herein can be implemented by electronic hardware or by a combination of computer software and electronic hardware. Whether these functions are performed in hardware or software depends on the specific application and design constraints of the technical solution. A skilled person may use different methods to implement the described functions for each specific application, but such implementation shall not be considered beyond the scope of the present invention.
It may be clearly understood by a person skilled in the art that, for convenience and brevity of description, for the specific working processes of the system, apparatus, and units described above, reference may be made to the corresponding processes in the foregoing method embodiments; details are not described here again.
In the embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative; for instance, the division into units is merely a logical function division, and there may be other division manners in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the shown or discussed mutual couplings, direct couplings, or communication connections may be implemented through some interfaces, and the indirect couplings or communication connections between apparatuses or units may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the objective of the solution of this embodiment.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit.
If the functions are implemented in the form of software functional units and sold or used as an independent product, they may be stored in a computer-readable storage medium. Based on such an understanding, the technical solution of the present invention essentially, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for instructing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disc.
The foregoing descriptions are merely specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any change or replacement that can be readily figured out by a person skilled in the art within the technical scope disclosed in the present invention shall fall within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
The embodiments in this specification are all described in a progressive manner. Each embodiment focuses on its differences from the other embodiments, and for the same or similar parts between the embodiments, reference may be made to each other.
Although preferred embodiments of the present invention have been described, a person skilled in the art, once learning the basic inventive concept, can make additional changes and modifications to these embodiments. Therefore, the appended claims are intended to be construed as including the preferred embodiments and all changes and modifications falling within the scope of the embodiments of the present invention.
Finally, it should be noted that, in the embodiments of the present invention, relational terms such as first and second are used only to distinguish one entity or operation from another, and do not necessarily require or imply any actual relationship or order between these entities or operations. Moreover, the terms "include", "comprise", or any other variant thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or terminal device that includes a series of elements includes not only those elements but also other elements that are not explicitly listed, or further includes elements inherent to such a process, method, article, or terminal device. In the absence of more restrictions, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article, or terminal device that includes the element.
The foregoing describes preferred embodiments of the present invention. It should be pointed out that a person of ordinary skill in the art may further make several improvements and modifications without departing from the principles of the present invention, and such improvements and modifications shall also fall within the protection scope of the present invention.

Claims (16)

1. A light metering method, applied to a mobile terminal including a first camera and a second camera, comprising:
obtaining a target metering region of a preview image while the first camera captures the preview image and outputs it to a photographing preview interface;
obtaining a target depth-of-field range of the target metering region detected by the second camera;
adjusting a metering weight of the preview image based on the target depth-of-field range; and
metering the preview image based on the adjusted metering weight;
wherein the photographed object shown in the target metering region is a part of the photographed objects shown in the preview image, and the depth-of-field range of that part is the target depth-of-field range;
the step of adjusting the metering weight of the preview image based on the target depth-of-field range comprises:
calculating, based on the target depth-of-field range, an average depth of field c of all pixels in the target metering region;
controlling the second camera to detect the depth-of-field ranges of all pixels in an edge region of the preview image;
successively extracting, from the target metering region and the edge region, the image regions whose depth-of-field ranges are [c-w1, c+w1], [c-w2, c-w1)∪(c+w1, c+w2], [c-w3, c-w2)∪(c+w2, c+w3], ..., [c-wm, c-wn)∪(c+wn, c+wm]; and
adjusting the metering weights of the image regions whose depth-of-field ranges are [c-w1, c+w1], [c-w2, c-w1)∪(c+w1, c+w2], [c-w3, c-w2)∪(c+w2, c+w3], ..., [c-wm, c-wn)∪(c+wn, c+wm] to m1%, m2%, m3%, ..., mn%, respectively;
and the step of metering the preview image based on the adjusted metering weight comprises:
metering each image region whose depth-of-field range is [c-w1, c+w1], [c-w2, c-w1)∪(c+w1, c+w2], [c-w3, c-w2)∪(c+w2, c+w3], ..., [c-wm, c-wn)∪(c+wn, c+wm] according to its corresponding metering weight m1%, m2%, m3%, ..., mn%;
wherein 0 ≤ w1 < w2 < w3 < ... < wn < wm, mn < ... < m3 < m2 < m1, m1% + m2% + m3% + ... + mn% = 100%, and the edge region is all image regions other than the target metering region.
2. The method according to claim 1, wherein the step of adjusting the metering weight of the preview image based on the target depth-of-field range comprises:
controlling the second camera to detect the depth-of-field ranges of all pixels in the edge region of the preview image;
extracting, from the edge region, at least one target edge region whose depth-of-field range is the target depth-of-field range;
determining the target metering region and the at least one target edge region as a center metering region;
determining all image regions in the preview image other than the center metering region as an auxiliary metering region; and
adjusting the metering weights of the center metering region and the auxiliary metering region;
wherein the edge region is all image regions other than the target metering region.
3. The method according to claim 2, wherein the step of adjusting the metering weights of the center metering region and the auxiliary metering region comprises:
adjusting the metering weight of the center metering region to 100%; and
adjusting the metering weight of the auxiliary metering region to 0;
and the step of metering the preview image based on the adjusted metering weight comprises:
metering the center metering region of the preview image according to the metering weight 100% of the center metering region.
4. The method according to claim 2, wherein the step of adjusting the metering weights of the center metering region and the auxiliary metering region comprises:
adjusting the metering weight of the center metering region to a%; and
adjusting the metering weight of the auxiliary metering region to b%;
and the step of metering the preview image based on the adjusted metering weight comprises:
metering the center metering region and the auxiliary metering region according to the metering weight a% of the center metering region and the metering weight b% of the auxiliary metering region;
wherein 50 < a < 100 and a + b = 100.
5. The method according to claim 1, wherein the step of adjusting the metering weight of the preview image based on the target depth-of-field range comprises:
calculating, based on the target depth-of-field range, an average depth of field d of all pixels in the target metering region;
determining a target value range in which the average depth of field d falls; and
determining, according to a preset correspondence between value ranges and metering weights, a metering weight s% corresponding to the target value range;
and the step of metering the preview image based on the adjusted metering weight comprises:
determining the metering weight of the edge region in the preview image to be t%; and
metering the center metering region and the edge region according to the metering weight s% of the center metering region and the metering weight t% of the edge region;
wherein the center metering region is the target metering region, the edge region is all image regions other than the target metering region, a larger value of the average depth of field d corresponds to a larger value of s%, and s + t = 100.
6. The method according to claim 1, wherein the step of obtaining the target metering region of the preview image comprises:
detecting a tap operation by the mobile terminal user on the photographing preview interface;
when a tap operation is detected, obtaining the position of the tap operation; and
determining, as the target metering region, the region of a preset first range in the preview image that contains the position of the tap operation.
7. The method according to claim 1, wherein the step of obtaining the target metering region of the preview image comprises:
determining the region of a preset second range in the preview image as the target metering region.
8. The method according to claim 1, wherein the step of obtaining the target metering region of the preview image comprises:
performing face detection on the preview image; and
when a face is detected, determining the region where the face is located as the target metering region.
9. A mobile terminal, including a first camera and a second camera, further comprising:
a first obtaining module, configured to obtain a target metering region of a preview image while the first camera captures the preview image and outputs it to a photographing preview interface;
a second obtaining module, configured to obtain a target depth-of-field range, detected by the second camera, of the target metering region obtained by the first obtaining module;
an adjustment module, configured to adjust a metering weight of the preview image based on the target depth-of-field range obtained by the second obtaining module; and
a metering module, configured to meter the preview image based on the metering weight adjusted by the adjustment module;
wherein the photographed object shown in the target metering region is a part of the photographed objects shown in the preview image, and the depth-of-field range of that part is the target depth-of-field range;
the adjustment module comprises:
a first calculation submodule, configured to calculate, based on the target depth-of-field range, an average depth of field c of all pixels in the target metering region;
a second control submodule, configured to control the second camera to detect the depth-of-field ranges of all pixels in an edge region of the preview image;
a second extraction submodule, configured to successively extract, from the target metering region and the edge region, the image regions whose depth-of-field ranges, as detected by the second camera under the control of the second control submodule, are [c-w1, c+w1], [c-w2, c-w1)∪(c+w1, c+w2], [c-w3, c-w2)∪(c+w2, c+w3], ..., [c-wm, c-wn)∪(c+wn, c+wm]; and
a second adjustment submodule, configured to adjust the metering weights of the image regions extracted by the second extraction submodule to m1%, m2%, m3%, ..., mn%, respectively;
and the metering module comprises:
a third metering submodule, configured to meter each image region whose depth-of-field range is [c-w1, c+w1], [c-w2, c-w1)∪(c+w1, c+w2], [c-w3, c-w2)∪(c+w2, c+w3], ..., [c-wm, c-wn)∪(c+wn, c+wm] according to its corresponding metering weight m1%, m2%, m3%, ..., mn%;
wherein 0 ≤ w1 < w2 < w3 < ... < wn < wm, mn < ... < m3 < m2 < m1, m1% + m2% + m3% + ... + mn% = 100%, and the edge region is all image regions other than the target metering region.
10. The mobile terminal according to claim 9, wherein the adjustment module comprises:
a first control submodule, configured to control the second camera to detect the depth-of-field ranges of all pixels in the edge region of the preview image;
a first extraction submodule, configured to extract, from the edge region, at least one target edge region whose depth-of-field range, as detected by the second camera under the control of the first control submodule, is the target depth-of-field range;
a first determining submodule, configured to determine the target metering region and the at least one target edge region extracted by the first extraction submodule as a center metering region;
a second determining submodule, configured to determine all image regions in the preview image other than the center metering region determined by the first determining submodule as an auxiliary metering region; and
a first adjustment submodule, configured to adjust the metering weights of the center metering region determined by the first determining submodule and the auxiliary metering region determined by the second determining submodule;
wherein the edge region is all image regions other than the target metering region.
11. The mobile terminal according to claim 10, wherein the first adjustment submodule comprises:
a first adjustment unit, configured to adjust the metering weight of the center metering region to 100%; and
a second adjustment unit, configured to adjust the metering weight of the auxiliary metering region to 0;
and the metering module comprises:
a first metering submodule, configured to meter the center metering region of the preview image according to the metering weight 100% of the center metering region.
12. The mobile terminal according to claim 10, wherein the first adjustment submodule comprises:
a third adjustment unit, configured to adjust the metering weight of the center metering region to a%; and
a fourth adjustment unit, configured to adjust the metering weight of the auxiliary metering region to b%;
and the metering module comprises:
a second metering submodule, configured to meter the center metering region and the auxiliary metering region according to the metering weight a% of the center metering region and the metering weight b% of the auxiliary metering region;
wherein 50 < a < 100 and a + b = 100.
13. The mobile terminal according to claim 9, wherein the adjustment module comprises:
a second calculation submodule, configured to calculate, based on the target depth-of-field range, an average depth of field d of all pixels in the target metering region;
a third determining submodule, configured to determine a target value range in which the average depth of field d calculated by the second calculation submodule falls; and
a fourth determining submodule, configured to determine, according to a preset correspondence between value ranges and metering weights, a metering weight s% corresponding to the target value range determined by the third determining submodule;
and the metering module comprises:
a fifth determining submodule, configured to determine the metering weight of the edge region in the preview image to be t%; and
a fourth metering submodule, configured to meter the center metering region and the edge region according to the metering weight s% of the center metering region and the metering weight t% of the edge region determined by the fifth determining submodule;
wherein the center metering region is the target metering region, the edge region is all image regions other than the target metering region, a larger value of the average depth of field d corresponds to a larger value of s%, and s + t = 100.
14. The mobile terminal according to claim 9, wherein the first obtaining module comprises:
a detection submodule, configured to detect a tap operation by the mobile terminal user on the photographing preview interface;
an obtaining submodule, configured to obtain, when a tap operation is detected, the position of the tap operation detected by the detection submodule; and
a sixth determining submodule, configured to determine, as the target metering region, the region of a preset first range in the preview image that contains the position of the tap operation obtained by the obtaining submodule.
15. The mobile terminal according to claim 9, wherein the first acquisition module comprises:
a seventh determination submodule, configured to determine a region of a preset second range in the preview image as the target photometry region.
16. The mobile terminal according to claim 9, wherein the first acquisition module comprises:
a detection submodule, configured to perform face detection on the preview image;
an eighth determination submodule, configured to determine, when a face is detected, the region where the face is located as the target photometry region.
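Claims 14 and 16 describe two ways of choosing the target photometry region: a box of a preset "first range" around a tap position, or the detected face rectangle. A minimal sketch, with hypothetical coordinates and a hypothetical box half-size (the patent does not specify the size of the preset range):

```python
def clamp(v: int, lo: int, hi: int) -> int:
    """Confine v to the interval [lo, hi]."""
    return max(lo, min(v, hi))

def region_around_tap(x: int, y: int, w: int, h: int, half: int = 50) -> tuple:
    """Claim 14: a box of a preset 'first range' centred on the tap position
    (x, y), clipped to the preview image of size w x h. `half` is an
    illustrative half-size, not a value from the patent."""
    return (clamp(x - half, 0, w), clamp(y - half, 0, h),
            clamp(x + half, 0, w), clamp(y + half, 0, h))

def region_from_face(face_box: tuple, w: int, h: int) -> tuple:
    """Claim 16: when a face is detected, its bounding box (as reported by
    any face detector) becomes the target photometry region, clipped to
    the preview image."""
    x0, y0, x1, y1 = face_box
    return (clamp(x0, 0, w), clamp(y0, 0, h), clamp(x1, 0, w), clamp(y1, 0, h))
```

For instance, a tap at (60, 60) on a 640x480 preview yields the region (10, 10, 110, 110), while a tap near the corner is clipped to the image bounds.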
CN201611151717.1A 2016-12-14 2016-12-14 A light metering method and mobile terminal Active CN106791809B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611151717.1A CN106791809B (en) 2016-12-14 2016-12-14 A light metering method and mobile terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201611151717.1A CN106791809B (en) 2016-12-14 2016-12-14 A light metering method and mobile terminal

Publications (2)

Publication Number Publication Date
CN106791809A CN106791809A (en) 2017-05-31
CN106791809B (en) 2018-11-30

Family

ID=58887918

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611151717.1A Active CN106791809B (en) 2016-12-14 2016-12-14 A light metering method and mobile terminal

Country Status (1)

Country Link
CN (1) CN106791809B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108955518B (en) * 2017-05-18 2020-06-16 武汉斗鱼网络科技有限公司 Light metering processing method, system and equipment based on image recognition
CN108307123B (en) * 2018-01-22 2021-10-26 维沃移动通信有限公司 Exposure adjusting method and mobile terminal
CN108881737A (en) * 2018-07-06 2018-11-23 漳州高新区远见产业技术研究有限公司 A kind of VR imaging method applied to mobile terminal
CN111343388B (en) * 2019-04-11 2021-09-24 杭州海康慧影科技有限公司 Method and device for determining exposure time
CN111083386B (en) * 2019-12-24 2021-01-22 维沃移动通信有限公司 Image processing method and electronic device

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101247479B (en) * 2008-03-26 2010-07-07 北京中星微电子有限公司 Automatic exposure method based on objective area in image
KR101305694B1 (en) * 2011-10-20 2013-09-09 엘지이노텍 주식회사 Method of image processing for detecting object, device, method for user interface and user interface thereof
CN104333710A (en) * 2014-11-28 2015-02-04 广东欧珀移动通信有限公司 Camera exposure method, camera exposure device and camera exposure equipment

Also Published As

Publication number Publication date
CN106791809A (en) 2017-05-31

Similar Documents

Publication Publication Date Title
CN106791809B (en) A light metering method and mobile terminal
CN105898143B (en) A snapshot method for a moving object, and mobile terminal
CN106331510B (en) A backlit photographing method and mobile terminal
CN106131449B (en) A photographing method and mobile terminal
CN105872148B (en) A method for generating a high dynamic range image, and mobile terminal
CN105227945B (en) Automatic white balance control method and mobile terminal
CN105827990B (en) An automatic exposure method and mobile terminal
CN105227858B (en) An image processing method and mobile terminal
CN105847674B (en) A preview image processing method based on a mobile terminal, and mobile terminal
CN106027907B (en) A method for automatically adjusting a camera, and mobile terminal
CN107507239B (en) An image segmentation method and mobile terminal
CN106254682B (en) A photographing method and mobile terminal
CN107172361B (en) A panoramic shooting method and mobile terminal
CN107231530B (en) A photographing method and mobile terminal
CN106231178B (en) A selfie method and mobile terminal
CN107920211A (en) A photographing method, terminal and computer-readable storage medium
CN106060419B (en) A photographing method and mobile terminal
CN107395998A (en) An image shooting method and mobile terminal
CN107197170 (en) An exposure control method and mobile terminal
CN106791375B (en) A shooting and focusing method and mobile terminal
CN105227857B (en) A method and apparatus for automatic exposure
CN106506962 (en) An image processing method and mobile terminal
CN106777329B (en) A method for processing image information, and mobile terminal
CN107222737B (en) A method for processing depth image data, and mobile terminal
CN107147837 (en) A method for setting shooting parameters, and mobile terminal

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20171107

Address after: No. 283, BBK Avenue, Wusha, Chang'an Town, Dongguan City, Guangdong Province, 523860

Applicant after: VIVO MOBILE COMMUNICATION CO., LTD.

Applicant after: Vivo Mobile Communication Co., Ltd. Beijing Branch

Address before: No. 283, BBK Avenue, Wusha, Chang'an Town, Dongguan City, Guangdong Province, 523860

Applicant before: VIVO MOBILE COMMUNICATION CO., LTD.

TA01 Transfer of patent application right
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20200515

Address after: No. 283, BBK Avenue, Wusha, Chang'an Town, Dongguan City, Guangdong Province, 523860

Patentee after: VIVO MOBILE COMMUNICATION Co.,Ltd.

Address before: No. 283, BBK Avenue, Wusha, Chang'an Town, Dongguan City, Guangdong Province, 523860

Co-patentee before: Vivo Mobile Communication Co., Ltd. Beijing Branch

Patentee before: VIVO MOBILE COMMUNICATION Co.,Ltd.

TR01 Transfer of patent right