WO2020011112A1 - Image processing method and system, readable storage medium, and terminal - Google Patents
Image processing method and system, readable storage medium, and terminal
- Publication number
- WO2020011112A1 (PCT/CN2019/094934)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- camera
- cameras
- color
- images
- Prior art date
Classifications
- G06T 7/50 — Image analysis; depth or shape recovery
- G06T 5/50 — Image enhancement or restoration using two or more images, e.g. averaging or subtraction
- G06T 5/70 — Denoising; smoothing
- H04N 23/951 — Computational photography systems, e.g. light-field imaging systems, using two or more images to influence resolution, frame rate or aspect ratio
- H04N 5/04 — Details of television systems; synchronising
Definitions
- the present invention relates to the technical field of multi-camera imaging, and in particular, to an image processing method, system, readable storage medium, and terminal.
- terminals such as mobile phones, tablets, computers, and video cameras are becoming increasingly widespread.
- at present, almost all smart devices integrate camera lenses to provide photo and video capture.
- dual cameras can obtain image depth information to achieve background blur, or can support effects such as optical zoom and color + monochrome noise reduction.
- Current terminals are moving towards this multi-camera trend.
- although current dual cameras can obtain the depth of field of an image, the depth information cannot be calibrated, so the background-blur effect is poor; for example, in-focus objects such as faces and hair are incorrectly blurred.
- at the same time, dual cameras cannot support both optical zoom and image noise reduction, so low-noise, high-definition images cannot be obtained.
- the object of the present invention is to provide an image processing method, system, readable storage medium, and terminal to solve the technical problem that the existing dual cameras cannot obtain low noise and high definition images.
- An image processing method is applied to a terminal.
- the terminal is provided with a main camera and at least two sub-cameras; all cameras are connected through a synchronization signal line, and the main camera and each of the sub-cameras have a common shooting area.
- the method includes: acquiring a main image captured by the main camera, and separately acquiring a sub image captured synchronously by each of the sub-cameras; synthesizing each sub image in turn with one main image, selecting one of the two images as the reference image according to a first preset rule for each synthesis, so as to synthesize multiple primary optimized images; and, according to a second preset rule, synthesizing the multiple primary optimized images into one secondary optimized image.
- an image processing method may further have the following additional technical features:
- the one-time optimized image is a depth image.
- the step of synthesizing the multiple primary optimized images includes: selecting one of the primary optimized images as a first reference image according to the second preset rule; obtaining a target image region with abnormal depth information in the first reference image; and obtaining, from the other primary optimized images, target depth information for the same region as the target image region, and using the target depth information to correct the depth information of the target image region.
- the second preset rule includes any one of the following rules: a rule that preferentially selects the image with the least abnormal depth information; a rule that preferentially selects the image with the largest field of view; a rule that preferentially selects the image with the smallest field of view; a rule that preferentially selects the composite image of the images captured by two specific cameras; or a rule that preferentially selects the image with the highest image quality.
- the main camera is a first color camera
- all the sub-cameras include at least a black-and-white camera and a second color camera, and the equivalent focal lengths of the first color camera and the second color camera differ.
- the step of synthesizing a primary optimized image from the images captured by the first color camera and the black-and-white camera includes: selecting, according to the first preset rule, one image as the reference image from the first color image captured by the first color camera and the second luminance signal image captured by the black-and-white camera; splitting the first color image into a first luminance signal image and a first chrominance signal image; based on the selected reference image, performing noise-reduction synthesis on the first luminance signal image and the second luminance signal image to obtain a noise-reduced third luminance signal image; and synthesizing the first chrominance signal image with the third luminance signal image to obtain the noise-reduced primary optimized image.
- the step of synthesizing a primary optimized image from the images captured by the first color camera and the second color camera includes: selecting, according to the first preset rule, one image as the reference image from the first color image captured by the first color camera and the second color image captured by the second color camera; determining the subject-and-background relationship between the first color image and the second color image according to the focal lengths of the first color camera and the second color camera; and performing selective blurring synthesis on the first color image and the second color image based on the selected reference image and the determined subject-and-background relationship, to obtain a clear primary optimized image.
- the arrangement of the cameras on the terminal is either of the following: when two sub-cameras are provided on the terminal, the lines connecting the main camera to the two sub-cameras are perpendicular to each other; when three sub-cameras are provided on the terminal, all cameras are arranged in a rectangle, with the four cameras located at the four corner points of the rectangle.
- a pin connected to the synchronization signal line of the main camera is used as a synchronization signal output terminal
- a pin connected to each synchronization signal line of each of the sub cameras is used as a synchronization signal input terminal
- the frame rate of each camera is the same.
- An image processing system is applied to a terminal.
- the terminal is provided with a main camera and at least two sub-cameras; all cameras are connected through a synchronization signal line, and the main camera and each of the sub-cameras have a common shooting area.
- an image processing system may also have the following additional technical features:
- the one-time optimized image is a depth image.
- the secondary synthesis module includes: a reference selection unit, configured to select one of the primary optimized images as a first reference image according to the second preset rule; an abnormality acquisition unit, configured to acquire a target image region with abnormal depth information in the first reference image; and a depth calibration unit, configured to obtain, from the other primary optimized images, target depth information for the same region as the target image region, and to use the target depth information to correct the depth information of the target image region.
- the second preset rule includes any one of the following rules: a rule that preferentially selects the image with the least abnormal depth information; a rule that preferentially selects the image with the largest field of view; a rule that preferentially selects the image with the smallest field of view; a rule that preferentially selects the composite image of the images captured by two specific cameras; or a rule that preferentially selects the image with the highest image quality.
- the main camera is a first color camera
- all the sub-cameras include at least a black-and-white camera and a second color camera, and the equivalent focal lengths of the first color camera and the second color camera differ.
- the primary synthesis module includes: a first selection unit, configured to select, according to the first preset rule, one image as the reference image from the first color image captured by the first color camera and the second luminance signal image captured by the black-and-white camera; an image splitting unit, configured to split the first color image into a first luminance signal image and a first chrominance signal image; a first synthesis unit, configured to perform noise-reduction synthesis on the first luminance signal image and the second luminance signal image to obtain a noise-reduced third luminance signal image; and a second synthesis unit, configured to synthesize the first chrominance signal image with the third luminance signal image to obtain the noise-reduced primary optimized image.
- the primary synthesis module further includes: a second selection unit, configured to select, according to the first preset rule, one image as the reference image from the first color image captured by the first color camera and the second color image captured by the second color camera; a relationship determination unit, configured to determine the subject-and-background relationship between the first color image and the second color image according to the focal lengths of the first color camera and the second color camera; and an image blurring unit, configured to perform selective blurring synthesis on the first color image and the second color image based on the selected reference image and the determined subject-and-background relationship, to obtain a clear primary optimized image.
- the arrangement of the cameras on the terminal is either of the following: when two sub-cameras are provided on the terminal, the lines connecting the main camera to the two sub-cameras are perpendicular to each other; when three sub-cameras are provided on the terminal, all cameras are arranged in a rectangle, with the four cameras located at the four corner points of the rectangle.
- a pin connected to the synchronization signal line of the main camera is used as a synchronization signal output terminal
- a pin connected to each synchronization signal line of each of the sub cameras is used as a synchronization signal input terminal
- the frame rate of each camera is the same.
- the present invention also proposes a computer-readable storage medium on which a computer program is stored, and when the program is executed by a processor, the image processing method as described above is implemented.
- the present invention also provides a terminal including a memory, a processor, and a computer program stored on the memory and executable on the processor.
- the terminal is provided with a main camera and at least two sub-cameras, all cameras are connected through a synchronization signal line, the main camera and each of the sub-cameras have a common shooting area, and the processor implements the method described above when it executes the program.
- with the image processing method, system, readable storage medium, and terminal described above, multiple sub-cameras are arranged and the main camera shares a common shooting area with each sub-camera, so that one main image and multiple sub images are acquired synchronously; each sub image is then synthesized with a main image to obtain multiple primary optimized images. Each such synthesis is a dual-camera synthesis, which can produce multiple depth-of-field images and/or at least one clear image and at least one noise-reduced image; the multiple primary optimized images are then synthesized again to output one secondary optimized image. Because synthesizing multiple depth-of-field images allows the depth information to be calibrated, and because the clear and noise-reduced images can be superimposed and combined with each other, the method, system, readable storage medium, and terminal can output a more accurate depth map that avoids erroneous blurring, and can also output low-noise, high-definition images, improving the overall quality of the captured pictures.
- FIG. 1 is a flowchart of the image processing method in a first embodiment of the present invention;
- FIG. 2 is a flowchart of the image processing method in a second embodiment of the present invention;
- FIG. 3 shows the arrangement of the cameras in the second embodiment of the present invention;
- FIG. 4 shows the arrangement of the cameras in another embodiment of the present invention;
- FIG. 5 is a flowchart of the image processing method in a third embodiment of the present invention;
- FIG. 6 is a schematic structural diagram of the image processing system in a fourth embodiment of the present invention.
- Description of main reference numerals: image acquisition module 11; primary synthesis module 12; secondary synthesis module 13; reference selection unit 131; abnormality acquisition unit 132; depth calibration unit 133; first selection unit 121; second selection unit 125; image splitting unit 122; first synthesis unit 123; second synthesis unit 124; relationship determination unit 126; image blurring unit 127.
- FIG. 1 shows an image processing method according to a first embodiment of the present invention, which is applied to a terminal.
- the terminal is provided with a main camera and at least two sub cameras, and all cameras are connected through a synchronization signal line. There is a common shooting area between the main camera and each of the sub cameras, and the image processing method includes steps S01 to S03.
- in step S01, a main image captured by the main camera is acquired, and a sub image captured synchronously by each of the sub-cameras is acquired separately.
- in a specific implementation, the pin connecting the main camera to the synchronization signal line may serve as the synchronization signal output terminal, the pin connecting each sub-camera to the synchronization signal line may serve as a synchronization signal input terminal, and the frame rates of the cameras are kept the same.
- when the main camera starts the exposure of a frame, it outputs a synchronization signal to trigger each sub-camera to start exposing at the same time, so that the frames output by all cameras in every frame period are captured simultaneously, providing the basis for subsequent image synthesis.
- the type of each camera can be chosen according to the requirements on the final output image; for example, if the final output needs to be a more accurate depth map, each camera can be a depth-capable camera. The more requirements the output image has, the more cameras, and the more camera types, are needed.
- in step S02, each sub image is synthesized in turn with one main image, and for each synthesis one of the two images is selected as the reference image according to the first preset rule, so as to synthesize multiple primary optimized images.
- the reference image is the one of the two images being synthesized that is selected to carry the final optimized result; for example, when a first image and a second image are synthesized and the first image is selected as the reference image, the synthesized output is an optimized first image.
- because the main camera and each sub-camera have a common shooting area, the synchronously acquired main image and each sub image share the same image region, so each sub image can be synthesized with a main image to optimize that shared region and obtain one primary optimized image.
- images taken by different types of cameras have their own characteristics; for example, black-and-white cameras produce low-noise images. Combining images from different camera types likewise yields images with different characteristics: synthesizing a black-and-white image from a black-and-white camera with a color image from a color camera yields a low-noise color image, reducing image noise; similarly, combining two images that both contain depth information yields a depth-of-field image.
- the first preset rule may be any one of the following rules: a rule that selects the image with the larger (or smaller) field of view, for example, if the field of view of the first image is larger than that of the second image, the first image is selected as the reference image (the larger the field of view, the larger the image area); a rule that selects the image captured by a specific camera, for example, always selecting the image captured by the main camera as the reference image; or a rule that selects the image with the better image quality.
- step S03 according to a second preset rule, a plurality of the primary optimized images are image synthesized to synthesize a secondary optimized image.
- the image synthesis in this step is mainly to integrate the characteristics of each primary optimized image onto a secondary optimized image to obtain a better image effect.
- in summary, in the image processing method of the above embodiment, multiple sub-cameras are arranged and the main camera shares a common shooting area with each sub-camera, so that one main image and multiple sub images are acquired synchronously; each sub image is then synthesized with a main image to obtain multiple primary optimized images. Each such synthesis is a dual-camera synthesis, which can produce multiple depth-of-field images and/or at least one clear image and at least one noise-reduced image; the multiple primary optimized images are then synthesized again to output one secondary optimized image. Because synthesizing multiple depth-of-field images allows the depth information to be calibrated, and because the clear and noise-reduced images can be superimposed and combined with each other, the method can output a more accurate depth map that avoids erroneous blurring, and can also output low-noise, high-definition images, improving the overall quality of the captured pictures.
- FIG. 2 shows an image processing method according to a second embodiment of the present invention, which is applied to a terminal.
- the terminal is provided with a main camera and two sub-cameras, three cameras in total, and each camera is capable of capturing images that can be used to synthesize a depth-of-field image. All cameras are connected through a synchronization signal line, and the main camera and each of the sub-cameras have a common shooting area. The image processing method includes steps S11 to S15.
- FIG. 3 shows the arrangement of the cameras on the terminal in this embodiment: the lines connecting the main camera to the two sub-cameras are perpendicular to each other. This perpendicular arrangement makes the image information from the main camera and each sub-camera more complementary, and looks better. In addition, all the cameras and the flash are arranged in a rectangle, located at its four corner points.
- Step S11 Acquire a main image taken by the main camera, and acquire a sub-image taken by each of the sub-cameras synchronously.
- the pins connected to the synchronization signal line of the main camera can be used as the synchronization signal output terminal, and the pins connected to the synchronization signal line of each sub camera are used as the synchronization signal input terminal, and the frame rate of each camera can be the same.
- since each camera can capture an image usable for synthesizing a depth-of-field image, both the main image and the sub images obtained by synchronous shooting contain depth information.
- in step S12, each sub image is synthesized in turn with one main image, and for each synthesis one of the two images is selected as the reference image according to the first preset rule, so as to synthesize multiple primary optimized images.
- here, the primary optimized image is a depth image, synthesized from the depth information of the input images.
- Step S13 According to a second preset rule, select one of the primary optimized images as the first reference image.
- in a specific implementation, the second preset rule includes any one of the following rules: a rule that preferentially selects the image with the least abnormal depth information; a rule that preferentially selects the image with the largest field of view; a rule that preferentially selects the image with the smallest field of view; a rule that preferentially selects the composite image of the images captured by two specific cameras, for example, always selecting the primary optimized image synthesized from the images captured by camera A and camera B as the first reference image; or a rule that preferentially selects the image with the highest image quality.
- Step S14 Obtain a target image region with abnormal depth information in the first reference image.
- in a depth image, a region whose depth information is wrong or cannot be determined is usually highlighted in a specific way, for example with a specific color or an outline, so the target image region with abnormal depth information in the first reference image can be obtained from how each region is displayed.
- Step S15: obtain target depth information for the same region as the target image region from the other primary optimized images, and use the target depth information to correct the depth information of the target image region, obtaining a secondary optimized image.
- the purpose of this step is to correct the depth information of the corresponding region of the reference image using the depth information of the same image region obtained from the other images, so that the reference image becomes a more accurate depth map, which is then output.
- in another embodiment (see FIG. 4), the terminal may instead be provided with a main camera and three sub-cameras, four cameras in total; all the cameras may be arranged in a square layout, with the flash placed at the center of the area enclosed by the cameras.
- the image processing method of the present invention is not limited to three or four cameras; in other embodiments five or more cameras may be used, depending on the requirements on the final output image.
- the image processing method in the above embodiments of the present invention can realize self-calibration of the depth of field map during execution, and can output a more accurate depth of field map.
- FIG. 5 shows an image processing method according to a third embodiment of the present invention, which is applied to a terminal.
- the terminal is provided with a main camera and two sub cameras.
- the main camera is a first color camera.
- the two sub-cameras are a black-and-white camera and a second color camera, respectively, and the equivalent focal lengths of the first color camera and the second color camera differ. All cameras are connected through a synchronization signal line, the main camera and each of the sub-cameras have a common shooting area, and the image processing method includes steps S21 to S29.
- Step S21 Acquire a main image taken by the main camera, and acquire a sub-image taken by each of the sub-cameras synchronously.
- the first color camera and the second color camera are both color imaging cameras, and the images they capture are color images; a color image generally carries luminance information and chrominance information that can be separated.
- the black-and-white camera is a monochrome imaging camera, and the image it captures is a black-and-white image, which has only luminance information, low noise, and good image stability. Therefore, the main image and one of the sub images are color images, and the other sub image is a black-and-white image.
- Step S22 According to a first preset rule, an image is selected as a reference image from a first color image captured by the first color camera and a second brightness signal image captured by the black and white camera.
- Step S23 Split the first color image into a first luminance signal image and a first chrominance signal image.
- Step S24 Based on the selected reference image, perform noise reduction synthesis on the first luminance signal image and the second luminance signal image to obtain a third luminance signal image after noise reduction.
- in a specific implementation, the noise-reduction synthesis of the first luminance signal image and the second luminance signal image captured by the black-and-white camera may work as follows: for each pixel, the average of the pixel values of the two luminance images at that pixel is taken as the final synthesized pixel value, and the final pixel values are rendered onto the reference image to obtain the third luminance signal image.
- because a black-and-white image carries only luminance information, the second luminance signal image is simply the black-and-white image captured by the black-and-white camera; and because the second luminance signal has low noise, the third luminance signal image obtained by synthesizing the first and second luminance signal images also has low noise.
- step S25 based on the selected reference image, the first chrominance signal image and the third luminance signal image are synthesized to obtain a one-time optimized image after noise reduction.
- Step S26 According to the first preset rule, select one image as a reference image from the first color image and the second color image captured by the second color camera;
- Step S27 Determine the subject and background relationship between the first color image and the second color image according to the focal lengths of the first color camera and the second color camera.
- the larger the focal length, the farther away a camera shoots, so it typically captures the background; conversely, the smaller the focal length, the closer it shoots, so it typically captures the subject. The focal lengths of the first color camera and the second color camera therefore determine which of the first color image and the second color image is the subject image and which is the background image.
- step S28 based on the selected reference image and the determined relationship between the subject and the background, the first color image and the second color image are selectively blurred and synthesized to obtain a clear primary optimized image.
- the selective blurring may be any one of blurring the background, blurring the subject, or blurring a certain image region; the choice may be preset or determined by the terminal's processor.
- the main purpose of steps S22 to S25 is to synthesize the images captured by the first color camera and the black-and-white camera into one primary optimized image, while the main purpose of steps S26 to S28 is to synthesize the images captured by the first color camera and the second color camera into another primary optimized image.
- in this embodiment, steps S22-S25 are executed before steps S26-S28, but the present invention is not limited to this order; in other embodiments, steps S26-S28 may be executed before steps S22-S25, or the two groups may be executed concurrently.
- Step S29 According to a second preset rule, image synthesis is performed on a plurality of the primary optimized images to synthesize a secondary optimized image.
- the second preset rule may be: first select one primary optimized image as the reference image, then perform pixel synthesis over the same image regions of the multiple primary optimized images and render the result onto the corresponding regions of the reference image to obtain the final output image.
- the image processing method in the above embodiments of the present invention can support effects such as optical zoom and color + black and white noise reduction during execution, and can output low noise and high definition images.
- the terminal may also be provided with a main camera and three sub-cameras, which are four cameras in total.
- the arrangement of each camera may be arranged with reference to FIG. 4, and is not described herein again.
- the image processing method of the present invention is not limited to three or four cameras; in other embodiments five or more cameras may be used, depending on the requirements on the final output image.
- FIG. 6 shows an image processing system in a fourth embodiment of the present invention, which is applied to a terminal.
- the terminal is provided with a main camera and at least two sub-cameras; all cameras are connected by a synchronization signal line, and the main camera and each of the sub-cameras have a common shooting area. The system includes:
- An image acquisition module 11 configured to acquire a main image captured by the main camera, and respectively acquire a secondary image captured by each of the secondary cameras synchronously;
- a primary synthesis module 12, configured to synthesize each sub image in turn with one main image, selecting one of the two images as the reference image according to a first preset rule for each synthesis, so as to synthesize multiple primary optimized images;
- the secondary synthesis module 13 is configured to perform image synthesis on a plurality of the primary optimized images according to a second preset rule to synthesize a secondary optimized image.
- the one-time optimized image is a depth image.
- the secondary synthesis module 13 includes:
- a reference selecting unit 131 configured to select one of the once optimized images as a first reference image according to the second preset rule
- An abnormality obtaining unit 132 configured to obtain a target image region in which the depth information in the first reference image is abnormal
- a depth calibration unit 133 is configured to obtain target depth information of the same area as the target image area from among the other optimized images, and use the target depth information to correct the depth information of the target image area.
- the second preset rule includes any one of the following rules: a rule that preferentially selects an image with the least abnormal depth information; a rule that preferentially selects an image with the largest field of view; a rule that preferentially selects an image with the smallest field of view ; Preferentially select the composite image of the images captured by the specific two cameras; or the rule of preferentially selecting the image with the highest image quality.
- the main camera is a first color camera
- all the sub cameras include at least a black and white camera and a second color camera, and there is a difference in the equivalent focal lengths of the first color camera and the second color camera. .
- the primary synthesis module 12 includes:
- a first selection unit 121, configured to select, according to the first preset rule, one image as the reference image from the first color image captured by the first color camera and the second luminance signal image captured by the black-and-white camera;
- An image splitting unit 122 configured to split the first color image into a first luminance signal image and a first chrominance signal image
- a first combining unit 123 configured to perform noise reduction synthesis on the first brightness signal image and the second brightness signal image to obtain a third brightness signal image after noise reduction;
- the second synthesizing unit 124 is configured to synthesize the first chrominance signal image and the third luminance signal image to obtain the primary optimized image after noise reduction.
- the one-time synthesis module 12 further includes:
- a second selection unit 125, configured to select, according to the first preset rule, one image as the reference image from the first color image captured by the first color camera and the second color image captured by the second color camera;
- a relationship determination unit 126, configured to determine the subject-and-background relationship between the first color image and the second color image according to the focal lengths of the first color camera and the second color camera;
- an image blurring unit 127, configured to perform selective blurring synthesis on the first color image and the second color image based on the selected reference image and the determined subject-and-background relationship, to obtain a clear primary optimized image.
- the arrangement of the cameras on the terminal is either of the following:
- when two sub-cameras are provided on the terminal, the lines connecting the main camera to the two sub-cameras are perpendicular to each other;
- when three sub-cameras are provided on the terminal, all cameras are arranged in a rectangle, with the four cameras located at the four corner points of the rectangle.
- a pin connected to the synchronization signal line of the main camera is used as a synchronization signal output terminal
- a pin connected to each synchronization signal line of each of the sub cameras is used as a synchronization signal input terminal
- the frame rate of each camera is the same.
- the present invention also proposes a computer-readable storage medium on which a computer program is stored, and when the program is executed by a processor, the image processing method as described above is implemented.
- the present invention also provides a terminal including a memory, a processor, and a computer program stored on the memory and executable on the processor.
- the terminal is provided with a main camera and at least two sub-cameras, all cameras are connected through a synchronization signal line, the shooting area of the main camera is equal to or includes the shooting area of each sub-camera, and the processor implements the method described above when it executes the program.
- the terminal includes, but is not limited to, a mobile phone, a computer, a tablet, a smart TV, a security device, a smart wearable device, and the like.
- a "computer-readable medium” may be any device that can contain, store, communicate, propagate, or transmit a program for use by or in connection with an instruction execution system, apparatus, or device.
- more specific examples (a non-exhaustive list) of computer-readable media include: an electrical connection (electronic device) with one or more wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a fiber-optic device, and a portable compact disc read-only memory (CD-ROM).
- the computer-readable medium may even be paper or another suitable medium on which the program is printed, since the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting, or otherwise processing it as necessary, and then stored in a computer memory.
- each part of the present invention may be implemented by hardware, software, firmware, or a combination thereof.
- multiple steps or methods may be implemented by software or firmware stored in a memory and executed by a suitable instruction execution system.
- for example, if implemented in hardware, as in another embodiment, any one or a combination of the following techniques known in the art may be used: discrete logic circuits having logic gates for implementing logic functions on data signals, application-specific integrated circuits with suitable combinational logic gates, programmable gate arrays (PGA), field-programmable gate arrays (FPGA), and so on.
- An image processing method is applied to a terminal.
- the terminal is provided with a main camera and at least two sub-cameras, all cameras are connected by a synchronization signal line, and the main camera and each of the sub-cameras have a common shooting area; the method includes:
- a plurality of the primary optimized images are image synthesized to synthesize a secondary optimized image.
- A2 The image processing method according to A1, wherein the one-time optimized image is a depth image.
- the image processing method according to A2, wherein the step of performing image synthesis on the plurality of optimized images at one time includes:
- the second preset rule includes any one of the following rules:
- the main camera is a first color camera
- all the sub-cameras include at least a black-and-white camera and a second color camera, and the equivalent focal lengths of the first color camera and the second color camera differ.
- the step of synthesizing the images captured by the first color camera and the black-and-white camera into the primary optimized image includes:
- the step of synthesizing the images captured by the first color camera and the second color camera into the primary optimized image includes:
- the arrangement manner of the cameras on the terminal is any one of the following situations:
- when two sub-cameras are provided on the terminal, the lines connecting the main camera to the two sub-cameras are perpendicular to each other;
- when three sub-cameras are provided on the terminal, all cameras are arranged in a rectangle, with the four cameras located at the four corner points of the rectangle.
- A9 The image processing method according to A1, wherein the pin connecting the main camera to the synchronization signal line serves as the synchronization signal output terminal, the pin connecting each sub-camera to the synchronization signal line serves as a synchronization signal input terminal, and the frame rates of the cameras are the same.
- An image processing system applied to a terminal, where the terminal is provided with a main camera and at least two sub-cameras, all cameras are connected by a synchronization signal line, and the main camera and each of the sub-cameras have a common shooting area; the system includes:
- An image acquisition module configured to acquire a main image captured by the main camera, and respectively acquire a secondary image captured by each of the secondary cameras synchronously;
- a primary synthesis module, configured to synthesize each sub image in turn with one main image, selecting one of the two images as the reference image according to a first preset rule for each synthesis, so as to synthesize multiple primary optimized images;
- a secondary synthesis module is configured to perform image synthesis on a plurality of the primary optimized images according to a second preset rule to synthesize one secondary optimized image.
- a reference selecting unit configured to select one of the once optimized images as a first reference image according to the second preset rule
- An abnormality obtaining unit configured to obtain a target image region in which depth information is abnormal in the first reference image
- a depth calibration unit is configured to obtain target depth information of the same area as the target image area from among the other optimized images, and use the target depth information to correct the depth information of the target image area.
- the main camera is a first color camera
- all the sub-cameras include at least a black-and-white camera and a second color camera, and the equivalent focal lengths of the first color camera and the second color camera differ.
- a first selection unit, configured to select, according to the first preset rule, one image as the reference image from the first color image captured by the first color camera and the second luminance signal image captured by the black-and-white camera;
- An image splitting unit configured to split the first color image into a first luminance signal image and a first chrominance signal image
- a first synthesis unit configured to perform noise reduction synthesis on the first brightness signal image and the second brightness signal image to obtain a third brightness signal image after noise reduction
- a second synthesizing unit is configured to synthesize the first chrominance signal image and the third luminance signal image to obtain the primary optimized image after noise reduction.
- the one-time synthesis module further includes:
- a second selection unit, configured to select, according to the first preset rule, one image as the reference image from the first color image captured by the first color camera and the second color image captured by the second color camera;
- a relationship determining unit configured to determine a subject and background relationship between the first color image and the second color image according to a focal length of the first color camera and the second color camera;
- an image blurring unit, configured to perform selective blurring synthesis on the first color image and the second color image based on the selected reference image and the determined subject-and-background relationship, to obtain a clear primary optimized image.
- when two sub-cameras are provided on the terminal, the lines connecting the main camera to the two sub-cameras are perpendicular to each other;
- when three sub-cameras are provided on the terminal, all cameras are arranged in a rectangle, with the four cameras located at the four corner points of the rectangle.
- a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the method according to any of A1 to A9.
- a terminal comprising a memory, a processor, and a computer program stored on the memory and executable on the processor.
- the terminal is provided with a main camera and at least two sub-cameras, all cameras are connected through a synchronization signal line, and the main camera and each of the sub-cameras have a common shooting area; when the processor executes the program, the method according to any one of A1 to A9 is implemented.
Abstract
The present invention provides an image processing method, system, readable storage medium, and terminal. The method is applied to a terminal provided with a main camera and at least two sub-cameras, where all cameras are connected through a synchronization signal line and the cameras share a common shooting area. The method includes: acquiring images captured synchronously by the cameras; synthesizing each sub image in turn with one main image, selecting one of the two images as the reference image according to a first preset rule for each synthesis, so as to synthesize multiple primary optimized images; and, according to a second preset rule, synthesizing the multiple primary optimized images into one secondary optimized image. By arranging multiple sub-cameras and using multiple rounds of image synthesis, the image processing method, system, readable storage medium, and terminal of the present invention can output a more accurate depth map, avoid erroneous blurring, and also output low-noise, high-definition images.
Description
The present invention relates to the technical field of multi-camera imaging, and in particular to an image processing method, system, readable storage medium, and terminal.
With the continuous improvement of people's living standards, terminals such as mobile phones, tablets, computers, and video cameras are becoming increasingly widespread. At present, almost all smart devices integrate camera lenses to provide capture functions.
In recent years, as users' demands on the imaging quality of terminals keep rising, a number of terminals equipped with front and rear dual cameras have appeared. Dual cameras can obtain the depth of field of an image to achieve background blur, or can support effects such as optical zoom and color + monochrome noise reduction. Current terminals are developing toward this multi-camera trend.
However, in the prior art, although current dual cameras can obtain the depth of field of an image, the depth information cannot be calibrated, so the background-blur effect is poor; for example, in-focus objects such as faces and hair are incorrectly blurred. At the same time, dual cameras cannot support both optical zoom and image noise reduction, so low-noise, high-definition images cannot be obtained.
SUMMARY OF THE INVENTION
In view of this, the object of the present invention is to provide an image processing method, system, readable storage medium, and terminal, so as to solve the technical problem that existing dual cameras cannot obtain low-noise, high-definition images.
An image processing method according to an embodiment of the present invention is applied to a terminal, where the terminal is provided with a main camera and at least two sub-cameras, all cameras are connected through a synchronization signal line, and the main camera and each of the sub-cameras have a common shooting area. The method includes: acquiring a main image captured by the main camera, and separately acquiring a sub image captured synchronously by each of the sub-cameras; synthesizing each sub image in turn with one main image, selecting one of the two images as the reference image according to a first preset rule for each synthesis, so as to synthesize multiple primary optimized images; and, according to a second preset rule, synthesizing the multiple primary optimized images into one secondary optimized image.
In addition, the image processing method according to the above embodiment of the present invention may further have the following additional technical features:
Further, the primary optimized image is a depth image.
Further, the step of synthesizing the multiple primary optimized images includes: selecting one of the primary optimized images as a first reference image according to the second preset rule; obtaining a target image region with abnormal depth information in the first reference image; and obtaining, from the other primary optimized images, target depth information for the same region as the target image region, and using the target depth information to correct the depth information of the target image region.
Further, the preset rule includes any one of the following rules: a rule that preferentially selects the image with the least abnormal depth information; a rule that preferentially selects the image with the largest field of view; a rule that preferentially selects the image with the smallest field of view; a rule that preferentially selects the composite image of the images captured by two specific cameras; or a rule that preferentially selects the image with the highest image quality.
Further, the main camera is a first color camera, all the sub-cameras include at least a black-and-white camera and a second color camera, and the equivalent focal lengths of the first color camera and the second color camera differ.
Further, the step of synthesizing the images captured by the first color camera and the black-and-white camera into the primary optimized image includes: selecting, according to the first preset rule, one image as the reference image from the first color image captured by the first color camera and the second luminance signal image captured by the black-and-white camera; splitting the first color image into a first luminance signal image and a first chrominance signal image; based on the selected reference image, performing noise-reduction synthesis on the first luminance signal image and the second luminance signal image to obtain a noise-reduced third luminance signal image; and synthesizing the first chrominance signal image with the third luminance signal image to obtain the noise-reduced primary optimized image.
Further, the step of synthesizing the images captured by the first color camera and the second color camera into the primary optimized image includes: selecting, according to the first preset rule, one image as the reference image from the first color image captured by the first color camera and the second color image captured by the second color camera; determining the subject-and-background relationship between the first color image and the second color image according to the focal lengths of the first color camera and the second color camera; and performing selective blurring synthesis on the first color image and the second color image based on the selected reference image and the determined subject-and-background relationship, to obtain a clear primary optimized image.
Further, the arrangement of the cameras on the terminal is either of the following: when two sub-cameras are provided on the terminal, the lines connecting the main camera to the two sub-cameras are perpendicular to each other; when three sub-cameras are provided on the terminal, all cameras are arranged in a rectangle, with the four cameras located at the four corner points of the rectangle.
Further, the pin connecting the main camera to the synchronization signal line serves as the synchronization signal output terminal, the pin connecting each sub-camera to the synchronization signal line serves as a synchronization signal input terminal, and the frame rates of the cameras are the same.
An image processing system according to an embodiment of the present invention is applied to a terminal, where the terminal is provided with a main camera and at least two sub-cameras, all cameras are connected through a synchronization signal line, and the main camera and each of the sub-cameras have a common shooting area. The system includes: an image acquisition module, configured to acquire a main image captured by the main camera and to separately acquire a sub image captured synchronously by each of the sub-cameras; a primary synthesis module, configured to synthesize each sub image in turn with one main image, selecting one of the two images as the reference image according to a first preset rule for each synthesis, so as to synthesize multiple primary optimized images; and a secondary synthesis module, configured to synthesize, according to a second preset rule, the multiple primary optimized images into one secondary optimized image.
In addition, the image processing system according to the above embodiment of the present invention may further have the following additional technical features:
Further, the primary optimized image is a depth image.
Further, the secondary synthesis module includes: a reference selection unit, configured to select one of the primary optimized images as a first reference image according to the second preset rule; an abnormality acquisition unit, configured to acquire a target image region with abnormal depth information in the first reference image; and a depth calibration unit, configured to obtain, from the other primary optimized images, target depth information for the same region as the target image region, and to use the target depth information to correct the depth information of the target image region.
Further, the second preset rule includes any one of the following rules: a rule that preferentially selects the image with the least abnormal depth information; a rule that preferentially selects the image with the largest field of view; a rule that preferentially selects the image with the smallest field of view; a rule that preferentially selects the composite image of the images captured by two specific cameras; or a rule that preferentially selects the image with the highest image quality.
Further, the main camera is a first color camera, all the sub-cameras include at least a black-and-white camera and a second color camera, and the equivalent focal lengths of the first color camera and the second color camera differ.
Further, the primary synthesis module includes: a first selection unit, configured to select, according to the first preset rule, one image as the reference image from the first color image captured by the first color camera and the second luminance signal image captured by the black-and-white camera; an image splitting unit, configured to split the first color image into a first luminance signal image and a first chrominance signal image; a first synthesis unit, configured to perform noise-reduction synthesis on the first luminance signal image and the second luminance signal image to obtain a noise-reduced third luminance signal image; and a second synthesis unit, configured to synthesize the first chrominance signal image with the third luminance signal image to obtain the noise-reduced primary optimized image.
Further, the primary synthesis module further includes: a second selection unit, configured to select, according to the first preset rule, one image as the reference image from the first color image captured by the first color camera and the second color image captured by the second color camera; a relationship determination unit, configured to determine the subject-and-background relationship between the first color image and the second color image according to the focal lengths of the first color camera and the second color camera; and an image blurring unit, configured to perform selective blurring synthesis on the first color image and the second color image based on the selected reference image and the determined subject-and-background relationship, to obtain a clear primary optimized image.
Further, the arrangement of the cameras on the terminal is either of the following: when two sub-cameras are provided on the terminal, the lines connecting the main camera to the two sub-cameras are perpendicular to each other; when three sub-cameras are provided on the terminal, all cameras are arranged in a rectangle, with the four cameras located at the four corner points of the rectangle.
Further, the pin connecting the main camera to the synchronization signal line serves as the synchronization signal output terminal, the pin connecting each sub-camera to the synchronization signal line serves as a synchronization signal input terminal, and the frame rates of the cameras are the same.
The present invention further provides a computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, the image processing method described above is implemented.
The present invention further provides a terminal including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the terminal is provided with a main camera and at least two sub-cameras, all cameras are connected through a synchronization signal line, and the main camera and each of the sub-cameras have a common shooting area; the processor implements the method described above when it executes the program.
With the image processing method, system, readable storage medium, and terminal described above, multiple sub-cameras are arranged and the main camera shares a common shooting area with each sub-camera, so that one main image and multiple sub images are acquired synchronously; each sub image is then synthesized with a main image to obtain multiple primary optimized images. Each such synthesis is a dual-camera synthesis, which can produce multiple depth-of-field images and/or at least one clear image and at least one noise-reduced image; the multiple primary optimized images are then synthesized again to output one secondary optimized image. Because synthesizing multiple depth-of-field images allows the depth information to be calibrated, and because the clear and noise-reduced images can be superimposed and combined with each other, the method, system, readable storage medium, and terminal can output a more accurate depth map that avoids erroneous blurring, and can also output low-noise, high-definition images, improving the overall quality of the captured pictures.
FIG. 1 is a flowchart of the image processing method in a first embodiment of the present invention;
FIG. 2 is a flowchart of the image processing method in a second embodiment of the present invention;
FIG. 3 shows the arrangement of the cameras in the second embodiment of the present invention;
FIG. 4 shows the arrangement of the cameras in another embodiment of the present invention;
FIG. 5 is a flowchart of the image processing method in a third embodiment of the present invention;
FIG. 6 is a schematic structural diagram of the image processing system in a fourth embodiment of the present invention.
Description of main reference numerals: image acquisition module 11; primary synthesis module 12; secondary synthesis module 13; reference selection unit 131; abnormality acquisition unit 132; depth calibration unit 133; first selection unit 121; second selection unit 125; image splitting unit 122; first synthesis unit 123; second synthesis unit 124; relationship determination unit 126; image blurring unit 127.
The following detailed description further illustrates the present invention with reference to the above drawings.
To facilitate an understanding of the present invention, the present invention is described more fully below with reference to the related drawings, in which several embodiments of the present invention are shown. The present invention may, however, be implemented in many different forms and is not limited to the embodiments described herein; rather, these embodiments are provided so that the disclosure of the present invention will be more thorough and complete.
It should be noted that when an element is referred to as being "fixed to" another element, it may be directly on the other element or an intervening element may be present. When an element is regarded as being "connected to" another element, it may be directly connected to the other element or intervening elements may be present at the same time. The terms "vertical", "horizontal", "left", "right", and similar expressions used herein are for the purpose of illustration only.
Unless otherwise defined, all technical and scientific terms used herein have the same meanings as commonly understood by those skilled in the technical field of the present invention. The terms used in the specification of the present invention are for the purpose of describing specific embodiments only and are not intended to limit the present invention. The term "and/or" used herein includes any and all combinations of one or more of the associated listed items.
Referring to FIG. 1, an image processing method in a first embodiment of the present invention is shown. It is applied to a terminal provided with a main camera and at least two sub-cameras, where all cameras are connected through a synchronization signal line and the main camera and each of the sub-cameras have a common shooting area. The image processing method includes steps S01 to S03.
In step S01, a main image captured by the main camera is acquired, and a sub image captured synchronously by each of the sub-cameras is acquired separately.
In a specific implementation, the pin connecting the main camera to the synchronization signal line may serve as the synchronization signal output terminal, the pin connecting each sub-camera to the synchronization signal line may serve as a synchronization signal input terminal, and the frame rates of the cameras are kept the same. When the main camera starts the exposure of a frame, it outputs a synchronization signal to trigger each sub-camera to start exposing at the same time, so that the frames output by all cameras in every frame period are captured simultaneously, which provides the basis for subsequent image synthesis.
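As an illustration only, the following Python sketch models the synchronization scheme described above; the Camera class and its trigger()/read() methods are hypothetical stand-ins used to make the sketch runnable, not an API defined by this disclosure.

```python
import time

class Camera:
    """Hypothetical camera stub used only to make the sketch runnable."""
    def __init__(self, name):
        self.name = name
        self._exposure_start = None

    def trigger(self):
        # Start exposing; in the real terminal this is driven by the sync line.
        self._exposure_start = time.monotonic()

    def read(self):
        # Return a placeholder "frame" tagged with its exposure start time.
        return {"camera": self.name, "t": self._exposure_start}

def capture_frame_set(main_cam, sub_cams):
    # The main camera starts a frame and raises the sync signal; every
    # sub camera is triggered at (nominally) the same instant, so the frames
    # collected below belong to one simultaneously exposed set.
    main_cam.trigger()
    for cam in sub_cams:
        cam.trigger()
    return main_cam.read(), [cam.read() for cam in sub_cams]

main_img, sub_imgs = capture_frame_set(Camera("main"), [Camera("sub1"), Camera("sub2")])
```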
In addition, it should be noted that the type of each camera can be chosen according to the requirements on the final output image; for example, if the final output needs to be a more accurate depth map, each camera may be a depth-capable camera. The more requirements the output image has, the more cameras, and the more camera types, are needed.
In step S02, each sub image is synthesized in turn with one main image, and for each synthesis one of the two images is selected as the reference image according to the first preset rule, so as to synthesize multiple primary optimized images.
Here, the reference image is the one of the two images being synthesized that is selected to carry the final optimized result; for example, when a first image and a second image are synthesized and the first image is selected as the reference image, the synthesized output is an optimized first image.
It should be pointed out that, because the main camera and each sub-camera have a common shooting area, the synchronously acquired main image and each sub image share the same image region, so each sub image can be synthesized with a main image to optimize that shared region and obtain one primary optimized image.
Meanwhile, images taken by different types of cameras have their own characteristics; for example, black-and-white cameras produce low-noise images. Combining images from different camera types likewise yields images with different characteristics: synthesizing a black-and-white image from a black-and-white camera with a color image from a color camera yields a low-noise color image, reducing image noise; similarly, combining two images that both contain depth information yields a depth-of-field image.
In a specific implementation, the first preset rule may be any one of the following rules:
a rule that selects the image with the larger (or smaller) field of view, for example, if the field of view of the first image is larger than that of the second image, the first image is selected as the reference image (the larger the field of view, the larger the image area);
a rule that selects the image captured by a specific camera, for example, always selecting the image captured by the main camera as the reference image;
a rule that selects the image with the better image quality.
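These rules could be expressed, purely as a sketch, by a small selection helper; the rule names and the gradient-variance sharpness proxy below are illustrative assumptions, not definitions from this disclosure.

```python
import numpy as np

def pick_reference(images, fovs, rule="main_first", main_index=0):
    """Return the index of the image to use as the reference image.

    `images` are numpy arrays and `fovs` their fields of view in degrees.
    """
    if rule == "larger_fov":
        return int(np.argmax(fovs))
    if rule == "smaller_fov":
        return int(np.argmin(fovs))
    if rule == "main_first":
        # Always prefer the image captured by one specific camera (here, the main one).
        return main_index
    if rule == "best_quality":
        # Crude quality proxy: variance of a simple image gradient.
        scores = [float(np.var(np.gradient(img.astype(np.float32), axis=0)))
                  for img in images]
        return int(np.argmax(scores))
    raise ValueError(f"unknown rule: {rule}")
```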
In step S03, according to the second preset rule, the multiple primary optimized images are synthesized into one secondary optimized image.
It should be pointed out that the image synthesis in this step mainly integrates the characteristics of each primary optimized image into one secondary optimized image to obtain a better image effect.
In summary, in the image processing method of the above embodiment of the present invention, multiple sub-cameras are arranged and the main camera shares a common shooting area with each sub-camera, so that one main image and multiple sub images are acquired synchronously; each sub image is then synthesized with a main image to obtain multiple primary optimized images. Each such synthesis is a dual-camera synthesis, which can produce multiple depth-of-field images and/or at least one clear image and at least one noise-reduced image; the multiple primary optimized images are then synthesized again to output one secondary optimized image. Because synthesizing multiple depth-of-field images allows the depth information to be calibrated, and because the clear and noise-reduced images can be superimposed and combined with each other, the image processing method, system, readable storage medium, and terminal can output a more accurate depth map that avoids erroneous blurring, and can also output low-noise, high-definition images, improving the overall quality of the captured pictures.
Referring to FIG. 2, an image processing method in a second embodiment of the present invention is shown. It is applied to a terminal provided with a main camera and two sub-cameras, three cameras in total, where each camera can capture images usable for synthesizing a depth-of-field image, all cameras are connected through a synchronization signal line, and the main camera and each of the sub-cameras have a common shooting area. The image processing method includes steps S11 to S15.
Referring to FIG. 3, which shows the arrangement of the cameras on the terminal in this embodiment, the lines connecting the main camera to the two sub-cameras are perpendicular to each other. This perpendicular arrangement makes the image information from the main camera and each sub-camera more complementary, and looks better. In addition, all the cameras and the flash are arranged in a rectangle, located at its four corner points.
In step S11, a main image captured by the main camera is acquired, and a sub image captured synchronously by each of the sub-cameras is acquired separately.
In a specific implementation, the pin connecting the main camera to the synchronization signal line may serve as the synchronization signal output terminal, the pin connecting each sub-camera to the synchronization signal line may serve as a synchronization signal input terminal, and the frame rates of the cameras are kept the same.
It should be pointed out that, since each camera can capture an image usable for synthesizing a depth-of-field image, both the main image and the sub images obtained by synchronous shooting contain depth information.
In step S12, each sub image is synthesized in turn with one main image, and for each synthesis one of the two images is selected as the reference image according to the first preset rule, so as to synthesize multiple primary optimized images.
Here, the primary optimized image is a depth image, synthesized from the depth information of the input images.
In step S13, according to a second preset rule, one of the primary optimized images is selected as the first reference image.
In a specific implementation, the second preset rule includes any one of the following rules: a rule that preferentially selects the image with the least abnormal depth information; a rule that preferentially selects the image with the largest field of view; a rule that preferentially selects the image with the smallest field of view; a rule that preferentially selects the composite image of the images captured by two specific cameras, for example, always selecting the primary optimized image synthesized from the images captured by camera A and camera B as the first reference image; or a rule that preferentially selects the image with the highest image quality.
In step S14, a target image region with abnormal depth information in the first reference image is obtained.
It should be pointed out that in a depth image, a region whose depth information is wrong or cannot be determined is usually highlighted in a specific way, for example with a specific color or an outline, so the target image region with abnormal depth information in the first reference image can be obtained from how each region is displayed.
In step S15, target depth information for the same region as the target image region is obtained from the other primary optimized images, and the target depth information is used to correct the depth information of the target image region, yielding a secondary optimized image.
Understandably, the purpose of this step is to correct the depth information of the corresponding region of the reference image using the depth information of the same image region obtained from the other images, so that the reference image becomes a more accurate depth map, which is then output.
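A minimal numpy sketch of steps S13 to S15 follows, assuming the primary optimized depth maps are registered to the same pixel grid and that abnormal pixels are flagged with a known sentinel value; the actual flagging scheme (a specific colour, an outlined region, and so on) is device specific and not spelled out here.

```python
import numpy as np

def correct_depth(reference_depth, other_depths, invalid_value=0):
    """Patch abnormal regions of the first reference depth map with depth
    taken from the other primary optimized depth maps."""
    corrected = reference_depth.astype(np.float32).copy()
    abnormal = corrected == invalid_value            # target image region
    for depth in other_depths:
        usable = abnormal & (depth != invalid_value)
        corrected[usable] = depth[usable]            # target depth info from another map
        abnormal &= ~usable                          # keep only still-unresolved pixels
    return corrected
```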
In addition, referring to FIG. 4, in another embodiment the terminal may instead be provided with a main camera and three sub-cameras, four cameras in total; all the cameras may be arranged in a square layout, with the flash placed at the center of the area enclosed by the cameras. It should be noted that the image processing method of the present invention is not limited to three or four cameras; in other embodiments five or more cameras may be used, depending on the requirements on the final output image.
In summary, the image processing method in the above embodiment of the present invention can self-calibrate the depth map during execution and can output a more accurate depth map.
Referring to FIG. 5, an image processing method in a third embodiment of the present invention is shown. It is applied to a terminal provided with a main camera and two sub-cameras, where the main camera is a first color camera, the two sub-cameras are a black-and-white camera and a second color camera, respectively, and the equivalent focal lengths of the first color camera and the second color camera differ. All cameras are connected through a synchronization signal line, and the main camera and each of the sub-cameras have a common shooting area. The image processing method includes steps S21 to S29.
The arrangement of the cameras on the terminal in this embodiment may follow FIG. 3 and is not repeated here.
In step S21, a main image captured by the main camera is acquired, and a sub image captured synchronously by each of the sub-cameras is acquired separately.
It should be pointed out that the first color camera and the second color camera are both color imaging cameras, and the images they capture are color images; a color image generally carries luminance information and chrominance information that can be separated. The black-and-white camera is a monochrome imaging camera, and the image it captures is a black-and-white image, which has only luminance information, low noise, and good image stability. Therefore, the main image and one of the sub images are color images, and the other sub image is a black-and-white image.
In step S22, according to the first preset rule, one image is selected as the reference image from the first color image captured by the first color camera and the second luminance signal image captured by the black-and-white camera.
In step S23, the first color image is split into a first luminance signal image and a first chrominance signal image.
In step S24, based on the selected reference image, noise-reduction synthesis is performed on the first luminance signal image and the second luminance signal image to obtain a noise-reduced third luminance signal image.
In a specific implementation, the noise-reduction synthesis of the first luminance signal image and the second luminance signal image captured by the black-and-white camera may work as follows: for each pixel, the average of the pixel values of the two luminance images at that pixel is taken as the final synthesized pixel value, and the final pixel values are rendered onto the reference image to obtain the third luminance signal image.
It should be pointed out that, because a black-and-white image carries only luminance information, the second luminance signal image is simply the black-and-white image captured by the black-and-white camera; and because the second luminance signal has low noise, the third luminance signal image obtained by synthesizing the first and second luminance signal images also has low noise.
In step S25, based on the selected reference image, the first chrominance signal image and the third luminance signal image are synthesized to obtain one noise-reduced primary optimized image.
Understandably, synthesizing the first chrominance signal image with the third luminance signal image yields a color image; since the third luminance signal image has low noise, the result is a low-noise color image, so noise reduction of the image is achieved.
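A minimal sketch of steps S23 to S25 follows, assuming the first colour image has already been converted to a YCbCr-style luma/chroma representation and registered to the black-and-white camera's luminance image of the same size; the per-pixel average is the simple scheme described above, not the only possible one.

```python
import numpy as np

def denoise_with_mono(color_ycbcr, mono_luma):
    """Noise-reduce a colour image using the black-and-white camera's luma."""
    # Step S23: split the colour image into luma (Y) and chroma (Cb, Cr).
    y, cb, cr = [color_ycbcr[..., i].astype(np.float32) for i in range(3)]
    y2 = mono_luma.astype(np.float32)   # second luminance signal image

    # Step S24: per-pixel average of the two luma images gives the
    # noise-reduced third luminance signal image.
    y3 = (y + y2) / 2.0

    # Step S25: recombine the original chroma with the denoised luma.
    out = np.stack([y3, cb, cr], axis=-1)
    return np.clip(out, 0, 255).astype(np.uint8)
```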
In step S26, according to the first preset rule, one image is selected as the reference image from the first color image and the second color image captured by the second color camera.
In step S27, the subject-and-background relationship between the first color image and the second color image is determined according to the focal lengths of the first color camera and the second color camera.
Understandably, the larger the focal length, the farther away a camera shoots, so it typically captures the background; conversely, the smaller the focal length, the closer it shoots, so it typically captures the subject. The focal lengths of the first color camera and the second color camera therefore determine which of the first color image and the second color image is the subject image and which is the background image.
In step S28, based on the selected reference image and the determined subject-and-background relationship, selective blurring synthesis is performed on the first color image and the second color image to obtain a clear primary optimized image.
Here, the selective blurring may be any one of blurring the background, blurring the subject, or blurring a certain image region; the choice may be preset or determined by the terminal's processor.
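The sketch below illustrates steps S27 and S28 under several assumptions: the longer-focal-length image is treated as the background view and the shorter-focal-length image as the subject view, both images are registered to the same grid, and a subject mask is available from elsewhere (for example, from the depth map); the box blur is a stand-in for whatever blur the terminal actually applies.

```python
import numpy as np

def selective_blur(img_short_focal, img_long_focal, subject_mask, blur_radius=4):
    """Blur the background view while keeping the subject view sharp."""
    def box_blur(img, r):
        img = img.astype(np.float32)
        k = 2 * r + 1
        padded = np.pad(img, ((r, r), (r, r), (0, 0)), mode="edge")
        out = np.zeros_like(img)
        for dy in range(k):
            for dx in range(k):
                out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
        return out / (k * k)

    background = box_blur(img_long_focal, blur_radius)
    # Keep subject pixels from the subject view, blurred pixels elsewhere.
    out = np.where(subject_mask[..., None],
                   img_short_focal.astype(np.float32), background)
    return out.astype(np.uint8)
```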
It should be pointed out that the main purpose of steps S22 to S25 is to synthesize the images captured by the first color camera and the black-and-white camera into one primary optimized image, while the main purpose of steps S26 to S28 is to synthesize the images captured by the first color camera and the second color camera into another primary optimized image. In this embodiment, steps S22-S25 are executed before steps S26-S28, but the present invention is not limited to this order; in other embodiments, steps S26-S28 may be executed before steps S22-S25, or the two groups may be executed concurrently.
In step S29, according to the second preset rule, the multiple primary optimized images are synthesized into one secondary optimized image.
Here, the second preset rule may be: first select one primary optimized image as the reference image, then perform pixel synthesis over the same image regions of the multiple primary optimized images and render the result onto the corresponding regions of the reference image to obtain the final output image.
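As an illustration of this form of the second preset rule, the sketch below keeps one primary optimized image as the reference and renders a plain per-pixel average of all primary optimized images onto it; any other pixel-synthesis operator could be substituted.

```python
import numpy as np

def second_synthesis(primary_optimized, reference_index=0):
    """Synthesize one secondary optimized image from the primary optimized images."""
    base = primary_optimized[reference_index].astype(np.float32).copy()
    stack = np.stack([img.astype(np.float32) for img in primary_optimized], axis=0)
    # Pixel synthesis over the common image area (here the whole frame, by a
    # per-pixel mean), rendered back onto the reference image.
    base[:] = stack.mean(axis=0)
    return np.clip(base, 0, 255).astype(np.uint8)
```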
In summary, the image processing method in the above embodiment of the present invention can support effects such as optical zoom and color + monochrome noise reduction during execution and can output low-noise, high-definition images.
In addition, in another embodiment the terminal may instead be provided with a main camera and three sub-cameras, four cameras in total; the cameras may be arranged as shown in FIG. 4, which is not repeated here. It should be noted that the image processing method of the present invention is not limited to three or four cameras; in other embodiments five or more cameras may be used, depending on the requirements on the final output image.
In another aspect, the present invention further provides an image processing system. Referring to FIG. 6, an image processing system in a fourth embodiment of the present invention is shown. It is applied to a terminal provided with a main camera and at least two sub-cameras, where all cameras are connected through a synchronization signal line and the main camera and each of the sub-cameras have a common shooting area. The system includes:
an image acquisition module 11, configured to acquire a main image captured by the main camera and to separately acquire a sub image captured synchronously by each of the sub-cameras;
a primary synthesis module 12, configured to synthesize each sub image in turn with one main image, selecting one of the two images as the reference image according to a first preset rule for each synthesis, so as to synthesize multiple primary optimized images;
a secondary synthesis module 13, configured to synthesize, according to a second preset rule, the multiple primary optimized images into one secondary optimized image.
Further, the primary optimized image is a depth image.
Further, the secondary synthesis module 13 includes:
a reference selection unit 131, configured to select one of the primary optimized images as a first reference image according to the second preset rule;
an abnormality acquisition unit 132, configured to acquire a target image region with abnormal depth information in the first reference image;
a depth calibration unit 133, configured to obtain, from the other primary optimized images, target depth information for the same region as the target image region, and to use the target depth information to correct the depth information of the target image region.
Further, the second preset rule includes any one of the following rules: a rule that preferentially selects the image with the least abnormal depth information; a rule that preferentially selects the image with the largest field of view; a rule that preferentially selects the image with the smallest field of view; a rule that preferentially selects the composite image of the images captured by two specific cameras; or a rule that preferentially selects the image with the highest image quality.
Further, the main camera is a first color camera, all the sub-cameras include at least a black-and-white camera and a second color camera, and the equivalent focal lengths of the first color camera and the second color camera differ.
Further, the primary synthesis module 12 includes:
a first selection unit 121, configured to select, according to the first preset rule, one image as the reference image from the first color image captured by the first color camera and the second luminance signal image captured by the black-and-white camera;
an image splitting unit 122, configured to split the first color image into a first luminance signal image and a first chrominance signal image;
a first synthesis unit 123, configured to perform noise-reduction synthesis on the first luminance signal image and the second luminance signal image to obtain a noise-reduced third luminance signal image;
a second synthesis unit 124, configured to synthesize the first chrominance signal image with the third luminance signal image to obtain the noise-reduced primary optimized image.
Further, the primary synthesis module 12 further includes:
a second selection unit 125, configured to select, according to the first preset rule, one image as the reference image from the first color image captured by the first color camera and the second color image captured by the second color camera;
a relationship determination unit 126, configured to determine the subject-and-background relationship between the first color image and the second color image according to the focal lengths of the first color camera and the second color camera;
an image blurring unit 127, configured to perform selective blurring synthesis on the first color image and the second color image based on the selected reference image and the determined subject-and-background relationship, to obtain a clear primary optimized image.
Further, the arrangement of the cameras on the terminal is either of the following:
when two sub-cameras are provided on the terminal, the lines connecting the main camera to the two sub-cameras are perpendicular to each other;
when three sub-cameras are provided on the terminal, all cameras are arranged in a rectangle, with the four cameras located at the four corner points of the rectangle.
Further, the pin connecting the main camera to the synchronization signal line serves as the synchronization signal output terminal, the pin connecting each sub-camera to the synchronization signal line serves as a synchronization signal input terminal, and the frame rates of the cameras are the same.
The present invention further provides a computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, the image processing method described above is implemented.
The present invention further provides a terminal including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the terminal is provided with a main camera and at least two sub-cameras, all cameras are connected through a synchronization signal line, and the shooting area of the main camera is equal to or includes the shooting area of each sub-camera; the processor implements the method described above when it executes the program.
Understandably, the terminal includes, but is not limited to, a mobile phone, a computer, a tablet, a smart TV, a security device, a smart wearable device, and the like.
Those skilled in the art will understand that the logic and/or steps represented in the flowcharts or otherwise described herein may, for example, be regarded as a sequenced list of executable instructions for implementing logic functions, and may be embodied in any computer-readable medium for use by, or in connection with, an instruction execution system, apparatus, or device (such as a computer-based system, a system including a processor, or another system that can fetch instructions from an instruction execution system, apparatus, or device and execute them). For the purposes of this specification, a "computer-readable medium" may be any means that can contain, store, communicate, propagate, or transmit a program for use by, or in connection with, an instruction execution system, apparatus, or device.
More specific examples (a non-exhaustive list) of computer-readable media include: an electrical connection (electronic device) with one or more wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a fiber-optic device, and a portable compact disc read-only memory (CD-ROM). In addition, the computer-readable medium may even be paper or another suitable medium on which the program is printed, since the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting, or otherwise processing it as necessary, and then stored in a computer memory.
It should be understood that the parts of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, multiple steps or methods may be implemented with software or firmware that is stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or a combination of the following techniques known in the art may be used: discrete logic circuits having logic gates for implementing logic functions on data signals, application-specific integrated circuits with suitable combinational logic gates, programmable gate arrays (PGA), field-programmable gate arrays (FPGA), and so on.
In the description of this specification, descriptions referring to the terms "one embodiment", "some embodiments", "an example", "a specific example", or "some examples" mean that specific features, structures, materials, or characteristics described in connection with the embodiment or example are included in at least one embodiment or example of the present invention. In this specification, the schematic representations of these terms do not necessarily refer to the same embodiment or example, and the specific features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
The above embodiments express only several implementations of the present invention and are described in relative detail, but they should not therefore be understood as limiting the scope of the patent of the present invention. It should be pointed out that those of ordinary skill in the art may make several variations and improvements without departing from the concept of the present invention, all of which fall within the protection scope of the present invention. Therefore, the protection scope of the patent of the present invention shall be subject to the appended claims.
Embodiments of the present invention further disclose the following:
A1. An image processing method applied to a terminal, wherein the terminal is provided with one main camera and at least two auxiliary cameras, all cameras are connected by a synchronization signal line, and the main camera shares a common shooting area with each auxiliary camera, the method comprising:
acquiring a main image captured by the main camera, and separately acquiring an auxiliary image synchronously captured by each auxiliary camera;
synthesizing each auxiliary image in turn with one main image, wherein for each synthesis one of the two images is selected as a reference image according to a first preset rule, so as to synthesize multiple primary optimized images;
synthesizing the multiple primary optimized images into a single secondary optimized image according to a second preset rule.
A2. The image processing method according to A1, wherein the primary optimized images are depth images.
A3. The image processing method according to A2, wherein the step of synthesizing the multiple primary optimized images comprises:
selecting one primary optimized image as a first reference image according to the second preset rule;
acquiring a target image region of the first reference image whose depth information is abnormal;
acquiring, from the other primary optimized images, target depth information of the same region as the target image region, and using the target depth information to correct the depth information of the target image region.
A4. The image processing method according to A3, wherein the second preset rule includes any one of the following rules:
preferentially selecting the image containing the least abnormal depth information;
preferentially selecting the image with the largest field of view;
preferentially selecting the image with the smallest field of view;
preferentially selecting the composite image of the images captured by two specific cameras; or
preferentially selecting the image with the highest image quality.
A5. The image processing method according to A1, wherein the main camera is a first color camera, the auxiliary cameras include at least one monochrome camera and one second color camera, and the equivalent focal lengths of the first color camera and the second color camera differ.
A6. The image processing method according to A5, wherein the step of synthesizing the images captured by the first color camera and the monochrome camera into the primary optimized image comprises:
selecting, according to the first preset rule, one image from a first color image captured by the first color camera and a second luminance signal image captured by the monochrome camera as the reference image;
splitting the first color image into a first luminance signal image and a first chrominance signal image;
performing, based on the selected reference image, noise-reduction synthesis on the first luminance signal image and the second luminance signal image to obtain a noise-reduced third luminance signal image;
synthesizing the first chrominance signal image with the third luminance signal image to obtain the noise-reduced primary optimized image.
A7. The image processing method according to A5, wherein the step of synthesizing the images captured by the first color camera and the second color camera into the primary optimized image comprises:
selecting, according to the first preset rule, one image from the first color image captured by the first color camera and a second color image captured by the second color camera as the reference image;
determining the subject-background relationship between the first color image and the second color image according to the focal lengths of the first color camera and the second color camera;
performing, based on the selected reference image and the determined subject-background relationship, selective-bokeh synthesis on the first color image and the second color image to obtain a sharp primary optimized image.
A8. The image processing method according to A1, wherein the cameras on the terminal are arranged in any one of the following ways:
when the terminal is provided with two auxiliary cameras, the lines connecting the main camera to the two auxiliary cameras are perpendicular to each other;
when the terminal is provided with three auxiliary cameras, all the cameras are arranged in a rectangle, with the four cameras located at the four corner points of the rectangle.
A9. The image processing method according to A1, wherein the pin by which the main camera is connected to the synchronization signal line serves as a synchronization signal output, the pins by which the auxiliary cameras are connected to the synchronization signal line serve as synchronization signal inputs, and all cameras have the same frame rate.
B10. An image processing system applied to a terminal, wherein the terminal is provided with one main camera and at least two auxiliary cameras, all cameras are connected by a synchronization signal line, and the main camera shares a common shooting area with each auxiliary camera, the system comprising:
an image acquisition module, configured to acquire a main image captured by the main camera and to separately acquire an auxiliary image synchronously captured by each auxiliary camera;
a primary synthesis module, configured to synthesize each auxiliary image in turn with one main image, wherein for each synthesis one of the two images is selected as a reference image according to a first preset rule, so as to synthesize multiple primary optimized images;
a secondary synthesis module, configured to synthesize the multiple primary optimized images into a single secondary optimized image according to a second preset rule.
B11. The image processing system according to B10, wherein the primary optimized images are depth images.
B12. The image processing system according to B11, wherein the secondary synthesis module comprises:
a reference selection unit, configured to select one primary optimized image as a first reference image according to the second preset rule;
an anomaly acquisition unit, configured to acquire a target image region of the first reference image whose depth information is abnormal;
a depth calibration unit, configured to acquire, from the other primary optimized images, target depth information of the same region as the target image region, and to use the target depth information to correct the depth information of the target image region.
B13. The image processing system according to B12, wherein the second preset rule includes any one of the following rules:
preferentially selecting the image containing the least abnormal depth information;
preferentially selecting the image with the largest field of view;
preferentially selecting the image with the smallest field of view;
preferentially selecting the composite image of the images captured by two specific cameras; or
preferentially selecting the image with the highest image quality.
B14. The image processing system according to B10, wherein the main camera is a first color camera, the auxiliary cameras include at least one monochrome camera and one second color camera, and the equivalent focal lengths of the first color camera and the second color camera differ.
B15. The image processing system according to B14, wherein the primary synthesis module comprises:
a first selection unit, configured to select, according to the first preset rule, one image from a first color image captured by the first color camera and a second luminance signal image captured by the monochrome camera as the reference image;
an image splitting unit, configured to split the first color image into a first luminance signal image and a first chrominance signal image;
a first synthesis unit, configured to perform noise-reduction synthesis on the first luminance signal image and the second luminance signal image to obtain a noise-reduced third luminance signal image;
a second synthesis unit, configured to synthesize the first chrominance signal image with the third luminance signal image to obtain the noise-reduced primary optimized image.
B16. The image processing system according to B14, wherein the primary synthesis module further comprises:
a second selection unit, configured to select, according to the first preset rule, one image from the first color image captured by the first color camera and a second color image captured by the second color camera as the reference image;
a relationship determination unit, configured to determine the subject-background relationship between the first color image and the second color image according to the focal lengths of the first color camera and the second color camera;
an image bokeh unit, configured to perform, based on the selected reference image and the determined subject-background relationship, selective-bokeh synthesis on the first color image and the second color image to obtain a sharp primary optimized image.
B17. The image processing system according to B10, wherein the cameras on the terminal are arranged in any one of the following ways:
when the terminal is provided with two auxiliary cameras, the lines connecting the main camera to the two auxiliary cameras are perpendicular to each other;
when the terminal is provided with three auxiliary cameras, all the cameras are arranged in a rectangle, with the four cameras located at the four corner points of the rectangle.
B18. The image processing system according to B10, wherein the pin by which the main camera is connected to the synchronization signal line serves as a synchronization signal output, the pins by which the auxiliary cameras are connected to the synchronization signal line serve as synchronization signal inputs, and all cameras have the same frame rate.
C19. A computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the method according to any one of A1-A9.
D20. A terminal comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the terminal is provided with one main camera and at least two auxiliary cameras, all cameras are connected by a synchronization signal line, and the main camera shares a common shooting area with each auxiliary camera; when executing the program, the processor implements the method according to any one of A1-A9.
Claims (20)
- An image processing method applied to a terminal, wherein the terminal is provided with one main camera and at least two auxiliary cameras, all cameras are connected by a synchronization signal line, and the main camera shares a common shooting area with each auxiliary camera, the method comprising: acquiring a main image captured by the main camera, and separately acquiring an auxiliary image synchronously captured by each auxiliary camera; synthesizing each auxiliary image in turn with one main image, wherein for each synthesis one of the two images is selected as a reference image according to a first preset rule, so as to synthesize multiple primary optimized images; and synthesizing the multiple primary optimized images into a single secondary optimized image according to a second preset rule.
- The image processing method according to claim 1, wherein the primary optimized images are depth images.
- The image processing method according to claim 2, wherein the step of synthesizing the multiple primary optimized images comprises: selecting one primary optimized image as a first reference image according to the second preset rule; acquiring a target image region of the first reference image whose depth information is abnormal; and acquiring, from the other primary optimized images, target depth information of the same region as the target image region, and using the target depth information to correct the depth information of the target image region.
- The image processing method according to claim 3, wherein the second preset rule includes any one of the following rules: preferentially selecting the image containing the least abnormal depth information; preferentially selecting the image with the largest field of view; preferentially selecting the image with the smallest field of view; preferentially selecting the composite image of the images captured by two specific cameras; or preferentially selecting the image with the highest image quality.
- The image processing method according to claim 1, wherein the main camera is a first color camera, the auxiliary cameras include at least one monochrome camera and one second color camera, and the equivalent focal lengths of the first color camera and the second color camera differ.
- The image processing method according to claim 5, wherein the step of synthesizing the images captured by the first color camera and the monochrome camera into the primary optimized image comprises: selecting, according to the first preset rule, one image from a first color image captured by the first color camera and a second luminance signal image captured by the monochrome camera as the reference image; splitting the first color image into a first luminance signal image and a first chrominance signal image; performing, based on the selected reference image, noise-reduction synthesis on the first luminance signal image and the second luminance signal image to obtain a noise-reduced third luminance signal image; and synthesizing the first chrominance signal image with the third luminance signal image to obtain the noise-reduced primary optimized image.
- The image processing method according to claim 5, wherein the step of synthesizing the images captured by the first color camera and the second color camera into the primary optimized image comprises: selecting, according to the first preset rule, one image from the first color image captured by the first color camera and a second color image captured by the second color camera as the reference image; determining the subject-background relationship between the first color image and the second color image according to the focal lengths of the first color camera and the second color camera; and performing, based on the selected reference image and the determined subject-background relationship, selective-bokeh synthesis on the first color image and the second color image to obtain a sharp primary optimized image.
- The image processing method according to claim 1, wherein the cameras on the terminal are arranged in any one of the following ways: when the terminal is provided with two auxiliary cameras, the lines connecting the main camera to the two auxiliary cameras are perpendicular to each other; when the terminal is provided with three auxiliary cameras, all the cameras are arranged in a rectangle, with the four cameras located at the four corner points of the rectangle.
- The image processing method according to claim 1, wherein the pin by which the main camera is connected to the synchronization signal line serves as a synchronization signal output, the pins by which the auxiliary cameras are connected to the synchronization signal line serve as synchronization signal inputs, and all cameras have the same frame rate.
- An image processing system applied to a terminal, wherein the terminal is provided with one main camera and at least two auxiliary cameras, all cameras are connected by a synchronization signal line, and the main camera shares a common shooting area with each auxiliary camera, the system comprising: an image acquisition module, configured to acquire a main image captured by the main camera and to separately acquire an auxiliary image synchronously captured by each auxiliary camera; a primary synthesis module, configured to synthesize each auxiliary image in turn with one main image, wherein for each synthesis one of the two images is selected as a reference image according to a first preset rule, so as to synthesize multiple primary optimized images; and a secondary synthesis module, configured to synthesize the multiple primary optimized images into a single secondary optimized image according to a second preset rule.
- The image processing system according to claim 10, wherein the primary optimized images are depth images.
- The image processing system according to claim 11, wherein the secondary synthesis module comprises: a reference selection unit, configured to select one primary optimized image as a first reference image according to the second preset rule; an anomaly acquisition unit, configured to acquire a target image region of the first reference image whose depth information is abnormal; and a depth calibration unit, configured to acquire, from the other primary optimized images, target depth information of the same region as the target image region, and to use the target depth information to correct the depth information of the target image region.
- The image processing system according to claim 12, wherein the second preset rule includes any one of the following rules: preferentially selecting the image containing the least abnormal depth information; preferentially selecting the image with the largest field of view; preferentially selecting the image with the smallest field of view; preferentially selecting the composite image of the images captured by two specific cameras; or preferentially selecting the image with the highest image quality.
- The image processing system according to claim 10, wherein the main camera is a first color camera, the auxiliary cameras include at least one monochrome camera and one second color camera, and the equivalent focal lengths of the first color camera and the second color camera differ.
- The image processing system according to claim 14, wherein the primary synthesis module comprises: a first selection unit, configured to select, according to the first preset rule, one image from a first color image captured by the first color camera and a second luminance signal image captured by the monochrome camera as the reference image; an image splitting unit, configured to split the first color image into a first luminance signal image and a first chrominance signal image; a first synthesis unit, configured to perform noise-reduction synthesis on the first luminance signal image and the second luminance signal image to obtain a noise-reduced third luminance signal image; and a second synthesis unit, configured to synthesize the first chrominance signal image with the third luminance signal image to obtain the noise-reduced primary optimized image.
- The image processing system according to claim 14, wherein the primary synthesis module further comprises: a second selection unit, configured to select, according to the first preset rule, one image from the first color image captured by the first color camera and a second color image captured by the second color camera as the reference image; a relationship determination unit, configured to determine the subject-background relationship between the first color image and the second color image according to the focal lengths of the first color camera and the second color camera; and an image bokeh unit, configured to perform, based on the selected reference image and the determined subject-background relationship, selective-bokeh synthesis on the first color image and the second color image to obtain a sharp primary optimized image.
- The image processing system according to claim 10, wherein the cameras on the terminal are arranged in any one of the following ways: when the terminal is provided with two auxiliary cameras, the lines connecting the main camera to the two auxiliary cameras are perpendicular to each other; when the terminal is provided with three auxiliary cameras, all the cameras are arranged in a rectangle, with the four cameras located at the four corner points of the rectangle.
- The image processing system according to claim 10, wherein the pin by which the main camera is connected to the synchronization signal line serves as a synchronization signal output, the pins by which the auxiliary cameras are connected to the synchronization signal line serve as synchronization signal inputs, and all cameras have the same frame rate.
- A computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the method according to any one of claims 1-9.
- A terminal comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the terminal is provided with one main camera and at least two auxiliary cameras, all cameras are connected by a synchronization signal line, and the main camera shares a common shooting area with each auxiliary camera; when executing the program, the processor implements the method according to any one of claims 1-9.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810747463.2 | 2018-07-09 | ||
CN201810747463.2A CN109064415A (zh) | 2018-07-09 | 2018-07-09 | Image processing method and system, readable storage medium, and terminal
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020011112A1 true WO2020011112A1 (zh) | 2020-01-16 |
Family
ID=64819163
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2019/094934 WO2020011112A1 (zh) | 2018-07-09 | 2019-07-05 | Image processing method and system, readable storage medium, and terminal
Country Status (2)
Country | Link |
---|---|
CN (1) | CN109064415A (zh) |
WO (1) | WO2020011112A1 (zh) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116051361A (zh) * | 2022-06-30 | 2023-05-02 | 荣耀终端有限公司 | Method and apparatus for processing image maintenance and test data |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109064415A (zh) * | 2018-07-09 | 2018-12-21 | 奇酷互联网络科技(深圳)有限公司 | 图像处理方法、系统、可读存储介质及终端 |
CN110312056B (zh) * | 2019-06-10 | 2021-09-14 | 青岛小鸟看看科技有限公司 | 一种同步曝光方法和图像采集设备 |
CN110611765B (zh) * | 2019-08-01 | 2021-10-15 | 深圳市道通智能航空技术股份有限公司 | 一种相机成像方法、相机系统及无人机 |
CN110620873B (zh) * | 2019-08-06 | 2022-02-22 | RealMe重庆移动通信有限公司 | 设备成像方法、装置、存储介质及电子设备 |
CN113518172B (zh) * | 2020-03-26 | 2023-06-20 | 华为技术有限公司 | 图像处理方法和装置 |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105827754A (zh) * | 2016-03-24 | 2016-08-03 | 维沃移动通信有限公司 | High dynamic range image generation method and mobile terminal |
CN108024054A (zh) * | 2017-11-01 | 2018-05-11 | 广东欧珀移动通信有限公司 | Image processing method, apparatus, and device |
CN109064415A (zh) * | 2018-07-09 | 2018-12-21 | 奇酷互联网络科技(深圳)有限公司 | Image processing method and system, readable storage medium, and terminal |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102156987A (zh) * | 2011-04-25 | 2011-08-17 | 深圳超多维光电子有限公司 | Method and apparatus for acquiring scene depth information |
JP5843751B2 (ja) * | 2012-12-27 | 2016-01-13 | 株式会社ソニー・コンピュータエンタテインメント | Information processing apparatus, information processing system, and information processing method |
CN105160663A (zh) * | 2015-08-24 | 2015-12-16 | 深圳奥比中光科技有限公司 | Method and system for acquiring a depth image |
CN106210524B (zh) * | 2016-07-29 | 2019-03-19 | 信利光电股份有限公司 | Shooting method of a camera module, and camera module |
CN106993136B (zh) * | 2017-04-12 | 2021-06-15 | 深圳市知赢科技有限公司 | Mobile terminal and multi-camera-based image noise reduction method and apparatus therefor |
CN107800827B (zh) * | 2017-11-10 | 2024-05-07 | 信利光电股份有限公司 | Shooting method of a multi-camera module, and multi-camera module |
CN107819992B (zh) * | 2017-11-28 | 2020-10-02 | 信利光电股份有限公司 | Triple-camera module and electronic device |
- 2018-07-09: CN application CN201810747463.2A filed (published as CN109064415A; status: not active, withdrawn)
- 2019-07-05: PCT application PCT/CN2019/094934 filed (published as WO2020011112A1; status: active, application filing)
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105827754A (zh) * | 2016-03-24 | 2016-08-03 | 维沃移动通信有限公司 | High dynamic range image generation method and mobile terminal |
CN108024054A (zh) * | 2017-11-01 | 2018-05-11 | 广东欧珀移动通信有限公司 | Image processing method, apparatus, and device |
CN109064415A (zh) * | 2018-07-09 | 2018-12-21 | 奇酷互联网络科技(深圳)有限公司 | Image processing method and system, readable storage medium, and terminal |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116051361A (zh) * | 2022-06-30 | 2023-05-02 | 荣耀终端有限公司 | Method and apparatus for processing image maintenance and test data |
CN116051361B (zh) * | 2022-06-30 | 2023-10-24 | 荣耀终端有限公司 | Method and apparatus for processing image maintenance and test data |
Also Published As
Publication number | Publication date |
---|---|
CN109064415A (zh) | 2018-12-21 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2020011112A1 (zh) | 图像处理方法、系统、可读存储介质及终端 | |
CN107948519B (zh) | 图像处理方法、装置及设备 | |
US10897609B2 (en) | Systems and methods for multiscopic noise reduction and high-dynamic range | |
US9288392B2 (en) | Image capturing device capable of blending images and image processing method for blending images thereof | |
CN109598673A (zh) | 图像拼接方法、装置、终端及计算机可读存储介质 | |
CN111932587B (zh) | 图像处理方法和装置、电子设备、计算机可读存储介质 | |
CN107911682B (zh) | 图像白平衡处理方法、装置、存储介质和电子设备 | |
US10489885B2 (en) | System and method for stitching images | |
US11184553B1 (en) | Image signal processing in multi-camera system | |
EP3891974B1 (en) | High dynamic range anti-ghosting and fusion | |
WO2020029679A1 (zh) | 控制方法、装置、成像设备、电子设备及可读存储介质 | |
JP2019533957A (ja) | 端末のための撮影方法及び端末 | |
CN105120247A (zh) | 一种白平衡调整方法及电子设备 | |
CN112991245B (zh) | 双摄虚化处理方法、装置、电子设备和可读存储介质 | |
US11503223B2 (en) | Method for image-processing and electronic device | |
WO2020215180A1 (zh) | 图像处理方法、装置和电子设备 | |
KR101915036B1 (ko) | 실시간 비디오 스티칭 방법, 시스템 및 컴퓨터 판독 가능 기록매체 | |
US20190355101A1 (en) | Image refocusing | |
US11032463B2 (en) | Image capture apparatus and control method thereof | |
CN112104796B (zh) | 图像处理方法和装置、电子设备、计算机可读存储介质 | |
JP2019075716A (ja) | 画像処理装置、画像処理方法、及びプログラム | |
CN109447925B (zh) | 图像处理方法和装置、存储介质、电子设备 | |
CN113014811A (zh) | 图像处理装置及方法、设备、存储介质 | |
WO2020244194A1 (zh) | 一种获得浅景深图像的方法及系统 | |
US12062161B2 (en) | Area efficient high dynamic range bandwidth compression |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 19834439; Country of ref document: EP; Kind code of ref document: A1 |
| NENP | Non-entry into the national phase | Ref country code: DE |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 19834439; Country of ref document: EP; Kind code of ref document: A1 |