
CN113298829B - Image processing method, device, electronic equipment and computer readable storage medium - Google Patents

Image processing method, device, electronic equipment and computer readable storage medium

Info

Publication number
CN113298829B
Authority
CN
China
Prior art keywords
image
background
region
processed
object region
Prior art date
Legal status
Active
Application number
CN202110659083.5A
Other languages
Chinese (zh)
Other versions
CN113298829A (en)
Inventor
内山寛之
刘锴
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202110659083.5A priority Critical patent/CN113298829B/en
Publication of CN113298829A publication Critical patent/CN113298829A/en
Application granted granted Critical
Publication of CN113298829B publication Critical patent/CN113298829B/en


Classifications

    All classifications fall under G (PHYSICS) › G06 (COMPUTING; CALCULATING OR COUNTING) › G06T (IMAGE DATA PROCESSING OR GENERATION, IN GENERAL):

    • G06T 7/11 Region-based segmentation
    • G06T 5/30 Erosion or dilatation, e.g. thinning
    • G06T 5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T 5/70 Denoising; Smoothing
    • G06T 7/13 Edge detection
    • G06T 7/194 Segmentation involving foreground-background segmentation
    • G06T 2207/10004 Still image; Photographic image
    • G06T 2207/10024 Color image
    • G06T 2207/20081 Training; Learning
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G06T 2207/20221 Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

Embodiments of the application disclose an image processing method and apparatus, an electronic device, and a computer-readable storage medium. The method includes: identifying a target object contained in an image to be processed and generating a first object region image, where the first object region image describes the initial object region of the target object in the image to be processed; determining background attribute information corresponding to the image to be processed according to the image to be processed and the first object region image; and correcting the first object region image based on the background attribute information to obtain a second object region image, where the second object region image describes the corrected object region of the target object in the image to be processed. The method, apparatus, electronic device, and computer-readable storage medium can determine the object region of the target object in the image more accurately.

Description

Image processing method, device, electronic equipment and computer readable storage medium
Technical Field
The present invention relates to the field of image technology, and in particular to an image processing method, an image processing apparatus, an electronic device, and a computer-readable storage medium.
Background
In the field of image technology, separating the foreground region from the background region of an image is a common processing step. The foreground region is usually the region occupied by a target object of interest, and the background region is everything outside the target object. Separating the two facilitates further image processing of the foreground region and/or the background region. If the foreground and background regions are determined inaccurately, subsequent image processing suffers. How to determine the region of the target object in an image more accurately is therefore a technical problem to be solved.
Disclosure of Invention
The embodiment of the application discloses an image processing method, an image processing device, electronic equipment and a computer readable storage medium, which can more accurately determine an object region of a target object in an image.
The embodiment of the application discloses an image processing method, which comprises the following steps:
identifying a target object contained in an image to be processed, and generating a first object region image, wherein the first object region image is used for describing an initial object region of the target object in the image to be processed;
Determining background attribute information corresponding to the image to be processed according to the image to be processed and the first object area image;
and correcting the first object region image based on the background attribute information to obtain a second object region image, wherein the second object region image is used for describing the corrected object region of the target object in the image to be processed.
The embodiment of the application discloses an image processing device, including:
the identification module is used for identifying a target object contained in an image to be processed and generating a first object area image, wherein the first object area image is used for describing an initial object area of the target object in the image to be processed;
the background information determining module is used for determining background attribute information corresponding to the image to be processed according to the image to be processed and the first object area image;
the correction module is used for correcting the first object area image based on the background attribute information to obtain a second object area image, and the second object area image is used for describing an object area of the target object after correction in the image to be processed.
An embodiment of the application discloses an electronic device comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to implement the method described above.
An embodiment of the application discloses a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the method described above.
According to the image processing method and apparatus, the electronic device, and the computer-readable storage medium disclosed in the embodiments of the application, a target object contained in an image to be processed is identified and a first object region image is generated, where the first object region image describes the initial object region of the target object in the image to be processed. Background attribute information corresponding to the image to be processed is determined according to the image to be processed and the first object region image, and the first object region image is corrected based on the background attribute information to obtain a second object region image. Because the initial object region of the target object is corrected using the background attribute information of the image to be processed, a more accurate and finer corrected object region is obtained. The second object region image can therefore locate the object region of the target object accurately and improve the effect of subsequent processing of the image to be processed, such as foreground-background separation.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the embodiments are briefly described below. The drawings in the following description are only some embodiments of the present application; other drawings may be obtained from them by a person skilled in the art without inventive effort.
FIG. 1 is a block diagram of an image processing circuit in one embodiment;
FIG. 2 is a flow chart of an image processing method in one embodiment;
FIG. 3A is a schematic diagram of generating a first object region image in one embodiment;
FIG. 3B is a schematic diagram of generating a first object region image in another embodiment;
FIG. 4 is a flow diagram of computing a background complexity image in one embodiment;
FIG. 5 is a schematic diagram of computing background complexity in one embodiment;
FIG. 6 is a schematic diagram of fusing a first object region image before erosion with a first object region image after erosion in one embodiment;
FIG. 7 is a schematic diagram of adjusting a tone curve based on background complexity in one embodiment;
FIG. 8 is a schematic diagram of generating a background overexposed image in one embodiment;
FIG. 9 is a schematic diagram of a background overexposed image obtained in one embodiment;
FIG. 10 is a flowchart of an image processing method in another embodiment;
FIG. 11 is a schematic diagram of blurring an image to be processed based on a second object region image in one embodiment;
FIG. 12 is a block diagram of an image processing apparatus in one embodiment;
FIG. 13 is a block diagram of an electronic device in one embodiment.
Detailed Description
The following description of the technical solutions in the embodiments of the present application will be made clearly and completely with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are only some embodiments of the present application, not all embodiments. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are within the scope of the present disclosure.
It should be noted that the terms "comprising" and "having" and any variations thereof in the embodiments and figures herein are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those listed steps or elements but may include other steps or elements not listed or inherent to such process, method, article, or apparatus.
It will be understood that the terms "first," "second," and the like, as used herein, may be used to describe various elements, but these elements are not limited by these terms. These terms are only used to distinguish one element from another element. For example, a first object region image may be referred to as a second object region image, and similarly, a second object region image may be referred to as a first object region image, without departing from the scope of the present application. Both the first object region image and the second object region image are object region images, but they are not the same object region image.
The embodiment of the application provides electronic equipment. The electronic device includes image processing circuitry, which may be implemented using hardware and/or software components, and may include various processing units defining an ISP (Image Signal Processing) pipeline. FIG. 1 is a block diagram of an image processing circuit in one embodiment. For ease of illustration, fig. 1 illustrates only aspects of image processing techniques related to embodiments of the present application.
As shown in fig. 1, the image processing circuit includes an ISP processor 140 and a control logic 150. Image data captured by imaging device 110 is first processed by ISP processor 140, where ISP processor 140 analyzes the image data to capture image statistics that may be used to determine one or more control parameters of imaging device 110. Imaging device 110 may include one or more lenses 112 and an image sensor 114. The image sensor 114 may include a color filter array (e.g., Bayer filters), and the image sensor 114 may acquire light intensity and wavelength information captured by each imaging pixel and provide a set of raw image data that may be processed by the ISP processor 140. The attitude sensor 120 (e.g., tri-axis gyroscope, Hall sensor, accelerometer, etc.) may provide acquired image processing parameters (e.g., anti-shake parameters) to the ISP processor 140 based on the type of attitude sensor 120 interface. The attitude sensor 120 interface may employ an SMIA (Standard Mobile Imaging Architecture) interface, other serial or parallel camera interfaces, or a combination of the above.
It should be noted that, although only one imaging device 110 is shown in fig. 1, in the embodiment of the present application, at least two imaging devices 110 may be included, where each imaging device 110 may correspond to one image sensor 114, or a plurality of imaging devices 110 may correspond to one image sensor 114, which is not limited herein. The operation of each imaging device 110 may be as described above.
In addition, the image sensor 114 may also send raw image data to the attitude sensor 120; the attitude sensor 120 may provide the raw image data to the ISP processor 140 based on the attitude sensor 120 interface type, or store the raw image data in the image memory 130.
The ISP processor 140 processes the raw image data on a pixel-by-pixel basis in a variety of formats. For example, each image pixel may have a bit depth of 8, 10, 12, or 14 bits, and ISP processor 140 may perform one or more image processing operations on the raw image data, collecting statistical information about the image data. Wherein the image processing operations may be performed with the same or different bit depth precision.
ISP processor 140 may also receive image data from image memory 130. For example, the attitude sensor 120 interface sends the raw image data to the image memory 130, and the raw image data in the image memory 130 is then provided to the ISP processor 140 for processing. Image memory 130 may be part of a memory device, a storage device, or a separate dedicated memory within the electronic device, and may include DMA (Direct Memory Access) features.
Upon receiving raw image data from the image sensor 114 interface, the attitude sensor 120 interface, or the image memory 130, the ISP processor 140 may perform one or more image processing operations, such as temporal filtering. The processed image data may be sent to image memory 130 for additional processing before being displayed. The ISP processor 140 receives the processed data from the image memory 130 and performs image data processing on it in the raw domain and in the RGB and YCbCr color spaces. The image data processed by ISP processor 140 may be output to display 160 for viewing by a user and/or further processing by a graphics engine or GPU (Graphics Processing Unit). In addition, the output of ISP processor 140 may also be sent to image memory 130, and display 160 may read image data from image memory 130. In one embodiment, image memory 130 may be configured to implement one or more frame buffers.
The statistics determined by ISP processor 140 may be sent to control logic 150. For example, the statistics may include image sensor 114 statistics such as vibration frequency of gyroscope, auto-exposure, auto-white balance, auto-focus, flicker detection, black level compensation, lens 112 shading correction, etc. Control logic 150 may include a processor and/or microcontroller that executes one or more routines (e.g., firmware) that may determine control parameters of imaging device 110 and control parameters of ISP processor 140 based on the received statistics. For example, the control parameters of the imaging device 110 may include attitude sensor 120 control parameters (e.g., gain, integration time for exposure control, anti-shake parameters, etc.), camera flash control parameters, camera anti-shake displacement parameters, lens 112 control parameters (e.g., focal length for focusing or zooming), or a combination of these parameters. The ISP control parameters may include gain levels and color correction matrices for automatic white balancing and color adjustment (e.g., during RGB processing), as well as lens 112 shading correction parameters.
The image processing method provided in the embodiment of the present application will be described with reference to the image processing circuit of fig. 1. The ISP processor 140 may acquire a to-be-processed image from the imaging device 110 or the image memory 130, may identify a target object contained in the to-be-processed image, and generate a first object region image for describing an initial object region of the target object in the to-be-processed image. The ISP processor 140 may determine background attribute information corresponding to the image to be processed according to the image to be processed and the first object region image, and correct the first object region image based on the background attribute information to obtain a second object region image, where the second object region image is used to describe an object region of the target object after correction in the image to be processed.
In some embodiments, the ISP processor 140 obtains a second object region image from which the object region in the image to be processed can be precisely determined, and uses the second object region image to separate the foreground region from the background region of the image to be processed. Alternatively, the separated background area or foreground area may be subjected to image processing, such as blurring the background area, beautifying the foreground area (e.g., brightness enhancement, portrait whitening, defogging, etc.), etc., but the present invention is not limited thereto. The ISP processor 140 may send the processed image to the image memory 130 for storage, or may send the processed image to the display 160 for display, so as to facilitate the user to observe the processed image through the display 160.
As shown in fig. 2, in one embodiment, an image processing method is provided, which may be applied to the above electronic device. The electronic device may include, but is not limited to, a mobile phone, a smart wearable device, a tablet computer, a PC (Personal Computer), a vehicle-mounted terminal, a digital camera, and the like. The image processing method may include the following steps:
step 210, identifying a target object contained in the image to be processed, and generating a first object region image.
The image to be processed may be any image in which a foreground object region is to be separated from the background region. The image to be processed may contain a target object, which may be the object of interest in the image; for example, the image to be processed may be a portrait image and the target object the portrait, or the image to be processed may be an animal image and the target object the animal. Alternatively, the target object may be part of the object of interest; for example, the image to be processed may be a portrait image and the target object the portrait's hair, which is not limited herein.
The image to be processed may be a color image, for example, an image in RGB (Red Green Blue) format or an image in YUV (Y represents brightness, U and V represent chromaticity) format, or the like. The image to be processed can be an image pre-stored in a memory of the electronic device, or can be an image acquired by the electronic device through a camera in real time.
In embodiments of the present application, the electronic device may identify the target object included in the image to be processed in a variety of ways to generate a first object region image, which describes the initial object region of the target object in the image to be processed. As one implementation, the depth information of the foreground object region and the background region in the image to be processed differs significantly. Depth information represents the distance between the photographed object and the camera; larger depth means a greater distance. Therefore, the object region and the background region may be divided using the per-pixel depth information of the image to be processed: for example, the background region may be the region composed of pixels whose depth exceeds a first threshold, and the object region the region composed of pixels whose depth is below a second threshold.
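To make the depth-based split concrete, here is a minimal NumPy sketch; the function name and the two thresholds are illustrative assumptions, not taken from the patent:

    import numpy as np

    def split_by_depth(depth_map: np.ndarray, far_thresh: float, near_thresh: float):
        """Split pixels by per-pixel depth (larger depth = farther from the camera)."""
        background_mask = depth_map > far_thresh   # pixels beyond the first threshold -> background
        object_mask = depth_map < near_thresh      # pixels inside the second threshold -> initial object region
        return object_mask, background_mask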
As another implementation, the electronic device may generate the first object region image using a neural network: the image to be processed may be input into a pre-trained object segmentation model, which identifies the target object contained in the image to be processed and outputs the first object region image. The object segmentation model may be trained on multiple sets of sample training images, where each sample image is labeled with its object region. The object segmentation model may include, but is not limited to, a network based on the DeepLab semantic segmentation algorithm, a U-Net network structure, an FCN (Fully Convolutional Network), etc., without limitation.
In other embodiments, the electronic device may also generate the first object region image in other ways, for example, the electronic device may extract image features of the image to be processed and analyze the image features to determine the object region. Alternatively, the image features may include, but are not limited to, edge features, color features, location features, and the like. The embodiment of the present application does not limit a specific manner of generating the first object area image.
In one embodiment, the first object region image may be a grayscale image in which each pixel carries a probability value between 0 and 1; the greater the probability value, the more likely the pixel belongs to the object region. The region composed of pixels whose probability exceeds a probability threshold (for example, 0.5, 0.6, or 0.8) may be taken as the initial object region.
Further, the first object region image may be a binarized object mask that represents the object region (i.e., the foreground region) of the image to be processed in a first color and the region outside the object region (i.e., the background region) in a second color, for example the object region in black and the background region in white, but it is not limited thereto.
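As an illustration of turning the probability map into such a binary mask, a hedged NumPy sketch (the threshold value and the 0/255 encoding are examples, not mandated by the patent):

    import numpy as np

    def probability_to_mask(prob_map: np.ndarray, threshold: float = 0.5) -> np.ndarray:
        """Binarize a [0, 1] probability map into an object mask.

        Pixels above `threshold` form the initial object region.
        Returns a uint8 mask: 255 = object (foreground), 0 = background.
        """
        return np.where(prob_map > threshold, 255, 0).astype(np.uint8)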
FIG. 3A is a schematic diagram of generating a first object region image in one embodiment. As shown in fig. 3A, the image to be processed 310 is a portrait image, the target object is a portrait, and the portrait in the image to be processed 310 may be identified to obtain a first object area image 320, where the first object area image 320 may include portrait area information in the image to be processed 310.
Fig. 3B is a schematic diagram of generating an image of a first object region in another embodiment. As shown in fig. 3B, the image to be processed 310 is a portrait image, the target object is the hair of the portrait, and the hair in the image to be processed 310 may be identified to obtain a first object area image 330, and the first object area image 330 may include the hair area information in the image to be processed 310.
Step 220, determining background attribute information corresponding to the image to be processed according to the image to be processed and the first object region image.
Background attribute information refers to information characterizing attributes of the background region of the image to be processed, and may include, but is not limited to, one or more of background complexity, background exposure, background color information, etc. Background complexity describes how complex the background region of the image to be processed is: the richer the image features the background region contains, the higher the complexity.
According to the first object area image, an initial object area and a background area in the image to be processed can be determined, and according to the image characteristics of the background area, background attribute information corresponding to the image to be processed can be determined.
Step 230, correcting the first object area image based on the background attribute information to obtain a second object area image.
The initial object region in the first object region image is only a preliminary result of identifying the target object in the image to be processed; it is rough and possibly inaccurate. Therefore, in the embodiment of the application, the first object region image can be corrected based on the background attribute information corresponding to the image to be processed, optimizing the initial object region into a more accurate and finer second object region image, which describes the corrected object region of the target object in the image to be processed.
In some embodiments, correcting the first object region image may include, but is not limited to, erosion processing, enhancement processing, padding processing, etc., to optimize an initial object region in the first object region image, reducing the likelihood that an image region not belonging to the target object is contained in the object region. After the electronic equipment obtains the second object region image, the object region and the background region can be accurately separated based on the second object region image, so that the accuracy of image separation is improved.
After separating the object region from the background region in the image to be processed, the separated object region and/or background region may be further processed. For example, blurring processing may be performed on the background area, brightness of the adjustment target area, white balance parameters of the adjustment target area, and the like, and the image processing performed after separation is not limited in the embodiment of the present application.
In the embodiment of the application, the target object contained in the image to be processed is identified and a first object region image describing its initial object region is generated; background attribute information corresponding to the image to be processed is determined from the image to be processed and the first object region image; and the first object region image is corrected based on the background attribute information to obtain a second object region image. Correcting the initial object region of the target object with the background attribute information of the image to be processed yields a more accurate and finer corrected object region, so the second object region image can locate the object region of the target object accurately and improve subsequent processing such as foreground-background separation.
In one embodiment, the background attribute information may include a background complexity image, which may include a background complexity of the image to be processed. As shown in fig. 4, the step of determining the background attribute information corresponding to the image to be processed according to the image to be processed and the first object region image may include the following steps:
step 402, performing edge detection on the image to be processed to obtain a first edge image.
The electronic device may perform edge detection on the image to be processed using a Canny edge detector, a Laplacian operator, a DoG (Difference of Gaussians) operator, a Sobel operator, or the like, to obtain a first edge image containing all edge information in the image to be processed. The embodiment of the present application does not limit the specific edge detection algorithm.
In some embodiments, the electronic device may first obtain the grayscale image corresponding to the image to be processed and perform edge detection on it to obtain the first edge image. A grayscale image has only one sampled color per pixel and is displayed in shades from black to white. The grayscale image corresponding to the image to be processed may be stored in advance in the electronic device's memory; as another embodiment, the electronic device may convert the image to be processed from RGB or YUV format into the grayscale image after acquiring it.
Step 404, removing the edge of the target object in the first edge image according to the first object area image to obtain a second edge image.
The object region in the first edge image can be determined from the first object region image, and the edges belonging to that object region removed, yielding a second edge image that retains only the edges outside the object region. Removing the target object's edges prevents them from contaminating the background edges and making the background complexity calculation inaccurate. Since this application aims to locate the object region precisely, removing the target object's edges from the first edge image using the first object region image makes the computed background complexity more accurate and better suited to that goal.
Step 406, performing dilation and blurring on the second edge image to obtain a background complexity image.
The electronic device may apply dilation and blurring to the edges in the second edge image to enlarge them, making the edge features more salient and improving the accuracy of the background complexity calculation. Dilation is a local maximum operation: a kernel is convolved over the edges in the second edge image and each covered pixel takes the local maximum, so the edges grow. The blurring may use Gaussian blur, mean blur, median blur, etc.; the specific dilation and blurring operations are not limited in the embodiments of the present application.
The background complexity can be calculated according to the second edge image after the expansion processing and the blurring processing, and a corresponding background complexity image is obtained. As a specific embodiment, the background complexity may be calculated according to the edges in the background area in the second edge image after the dilation process and the blurring process, where the background area including more edges corresponds to a higher background complexity and the background area including fewer edges corresponds to a lower background complexity.
FIG. 5 is a schematic diagram of computing background complexity in one embodiment. As shown in fig. 5, the electronic device may perform edge detection on the image to be processed 510 to obtain a first edge image 520, and then remove edges belonging to the portrait area in the first edge image 520 by using the first object area image 530 to obtain a second edge image 540, where edges except the portrait area remain in the second edge image 540. The second edge image 540 may be subjected to dilation and blurring, and the background complexity may be calculated based on the dilated and blurred second edge image, resulting in a background complexity image 550. The background complexity is calculated by utilizing the edge characteristics, the accuracy of the background complexity can be improved, and the accuracy of the first object region image can be further improved by utilizing the background complexity in the follow-up process.
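Putting steps 402 to 406 together, a hedged OpenCV sketch of the background-complexity computation; the Canny thresholds and kernel sizes are illustrative assumptions:

    import cv2
    import numpy as np

    def background_complexity(image_bgr: np.ndarray, object_mask: np.ndarray) -> np.ndarray:
        """Estimate per-pixel background complexity from background edges.

        object_mask: uint8 mask, 255 = object region, 0 = background.
        Returns a float map in [0, 1]; higher means a more complex background.
        """
        gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
        edges = cv2.Canny(gray, 100, 200)                  # first edge image
        edges[object_mask > 0] = 0                         # remove object edges -> second edge image
        kernel = np.ones((5, 5), np.uint8)
        dilated = cv2.dilate(edges, kernel)                # dilation grows the background edges
        blurred = cv2.GaussianBlur(dilated, (21, 21), 0)   # blurring spreads the edge evidence
        return blurred.astype(np.float32) / 255.0          # background complexity image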
In one embodiment, the step of correcting the first object region image based on the background attribute information to obtain the second object region image may include: performing erosion processing on the first object region image according to the background complexity image to obtain the second object region image.
After the electronic device obtains the background complexity image, it can erode the first object region image according to the background complexity image to obtain the second object region image. As a specific embodiment, a background complex region whose complexity exceeds a complexity threshold may be determined from the background complexity image, and the object region around that background complex region in the first object region image eroded.
Alternatively, the background complexity image may represent the background complex region and the background simple region with different values; for example, a gray value of 255 may represent the background simple region and 0 the background complex region, or white may represent the simple region and black the complex region, but it is not limited thereto.
Since a complex background region contains rich image content, it is easily mistaken for foreground, making the object region in the first object region image inaccurate. Eroding the object region around the background complex region shrinks it and reduces such mistakes. Erosion is a local minimum operation: a template (structuring element) is determined from the background complex region and applied to the surrounding object region in the first object region image, keeping only the pixels fully covered by the template; in effect, the background complex region erodes the surrounding object region. Alternatively, the eroded first object region image may be used directly as the second object region image.
As another embodiment, after eroding the first object region image, the first object region image before erosion (i.e., the one obtained initially) and the eroded first object region image may be fused to obtain the second object region image. The fusion may include, but is not limited to, averaging, or weighting the two with different coefficients.
Specifically, a fusion weight corresponding to the eroded first object region image can be determined from the background complexity image, and the pre-erosion and eroded first object region images fused based on that weight to obtain the second object region image. The two images may be combined by Alpha fusion, which assigns each pixel an Alpha value so that the pre-erosion and eroded images contribute with different transparencies. The fusion weight may be the Alpha value.
As one embodiment, the background complexity image may serve as the Alpha value of the eroded first object region image, and each pair of matching pixels in the pre-erosion and eroded images fused according to the background complexity image to obtain the second object region image.
FIG. 6 is a schematic diagram of fusing the first object region image before erosion with the first object region image after erosion in one embodiment. As shown in fig. 6, the two images may be combined by Alpha fusion, which may be expressed as formula (1):

I = αI₁ + (1 − α)I₂    (1)

where I₁ denotes the eroded first object region image 610, I₂ denotes the pre-erosion first object region image 620, α denotes the Alpha value of the eroded image 610, and I denotes the fused second object region image 640. The background complexity image 630 may be used as the Alpha value of the eroded image 610, and Alpha fusion performed between the eroded image 610 and the pre-erosion image 620 to obtain the second object region image 640. Fusing the pre-erosion and eroded first object region images with the background complexity image as the Alpha value improves the accuracy of the resulting second object region image, reduces cases where the background region is mistaken for the foreground region, and improves subsequent image processing.
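A hedged sketch of this erosion-and-fusion step built on formula (1); the kernel size is an assumption, while using the complexity map as the alpha follows the text above:

    import cv2
    import numpy as np

    def refine_mask_by_erosion(mask: np.ndarray, complexity: np.ndarray) -> np.ndarray:
        """Erode the object mask and alpha-blend it with the original.

        mask: first object region image as a float map in [0, 1].
        complexity: background complexity image in [0, 1], used as alpha.
        Returns the fused second object region image.
        """
        kernel = np.ones((7, 7), np.uint8)
        eroded = cv2.erode(mask, kernel)   # shrink the object region near complex background
        # I = alpha * I1 + (1 - alpha) * I2, with alpha = background complexity
        return complexity * eroded + (1.0 - complexity) * mask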
In one embodiment, the step of correcting the first object region image based on the background attribute information to obtain the second object region image may include: adjusting the tone curve of the first object region image according to the background complexity image to obtain the second object region image.
The tone curve of the first object region image reflects its brightness or shade variation. Adjusting the tone curve according to the background complexity adjusts the pixel intensity of each pixel in the first object region image, and thereby the probability of each pixel being assigned to the object region. Pixel intensity may be represented by the gray value of each pixel.
As a specific embodiment, the tone curve of the first object region image may be adjusted according to a negative-correlation formula and the background complexity image: where the background complexity is higher, the tone curve is lowered; where it is lower, the curve is raised. This increases the intensity of the object region adjacent to simple background (raising its probability of being object) and decreases the intensity of the object region adjacent to complex background (lowering its probability of being object).
In one embodiment, the negative correlation between the background complexity and the sigmoid tone curve can be expressed as formula (2), where f(x) denotes the tone curve, a and b are preset parameters, and c is the background complexity image. FIG. 7 is a schematic diagram of adjusting a tone curve based on background complexity in one embodiment. As shown in fig. 7, curve 710 is the initial tone curve of the first object region image, curve 720 the tone curve under high background complexity, and curve 730 the tone curve under low background complexity. The tone curve may also be represented by functions other than the sigmoid, such as a linear or piecewise-linear function, without limitation. Adjusting the tone curve of the first object region image with the background complexity image reduces the chance of mistaking the background region for foreground and improves the accuracy of the determined object region.
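As an illustrative example only (an assumed parameterization, not necessarily the patent's formula (2)), a complexity-shifted sigmoid with the stated negative correlation could be:

    f(x) = 1 / (1 + e^(−a(x − b − c)))

Here a larger background complexity c shifts the curve to the right, lowering f(x) for a given intensity x, matching the behavior of curves 720 and 730 described above.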
It should be noted that the electronic device may erode the first object region image based on the background complexity image, adjust its tone curve, or do both to obtain the second object region image; the processing order is not limited herein.
In the embodiment of the application, the background complexity image can be utilized to correct the first object region image, so that the possibility that the background region is mistaken for the object region can be reduced, and the object region of the target object in the image to be processed can be more accurately determined.
In one embodiment, the above-mentioned background attribute information may include a background overexposed image including an overexposed background region having an exposure value greater than an exposure threshold in a background region of the image to be processed. As shown in fig. 8, the step of determining the background attribute information corresponding to the image to be processed according to the image to be processed and the first object region image may include the following steps:
step 802, determining an overexposed region with an exposure value greater than an exposure threshold in an image to be processed, and generating a first overexposed image.
The electronic device can acquire the exposure value of each pixel in the image to be processed. Generally, when the camera captures the image, the more light a pixel receives, the larger its exposure value. Alternatively, the brightness value of each pixel may be obtained and the exposure value derived from it: a larger brightness value, or a larger contrast between a pixel's brightness and that of its neighbors, may correspond to a larger exposure value, and so on.
The exposure value of each pixel can be compared with the exposure threshold; the region formed by pixels whose exposure value exceeds the threshold serves as the overexposed region, and the first overexposed image contains this overexposed-region information for the image to be processed.
Step 804, removing the object overexposure region in the first overexposed image according to the first object region image, to obtain a second overexposed image.
The object region in the first overexposed image may be determined from the first object region image, and the overexposed pixels falling inside it (the object overexposure region) removed, yielding a second overexposed image that retains only the overexposed regions outside the object region, i.e., only the overexposed background region.
Step 806, performing dilation processing and blurring processing on the second overexposed image to obtain a background overexposed image.
The electronic device may apply dilation and blurring to the overexposed region in the second overexposed image to enlarge it, making the overexposed background region in the resulting background overexposed image more salient and accurate.
FIG. 9 is a schematic diagram of a background overexposed image obtained in one embodiment. As shown in fig. 9, the electronic device may determine an overexposed region in the to-be-processed image 910 with an exposure value greater than an exposure threshold, generate a first overexposed image 920, remove a portrait overexposed region in the first overexposed image 920 according to the first object region image 930, obtain a second overexposed image 940, and perform an expansion process and a blurring process on the second overexposed image 940 to obtain a background overexposed image 950. The first object region image is utilized to remove the object overexposure region in the first overexposure image, so that the overexposed background region in the background overexposed image is more accurate, and the accuracy of subsequent adjustment of the first object region image can be improved.
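A hedged OpenCV sketch of steps 802 to 806, using pixel luminance as a stand-in for the exposure value; the threshold and kernel sizes are illustrative assumptions:

    import cv2
    import numpy as np

    def background_overexposure(image_bgr: np.ndarray, object_mask: np.ndarray,
                                exposure_thresh: int = 240) -> np.ndarray:
        """Build a background overexposure map.

        object_mask: uint8 mask, 255 = object region, 0 = background.
        Returns a float map in [0, 1]; higher = overexposed background.
        """
        gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
        _, over = cv2.threshold(gray, exposure_thresh, 255, cv2.THRESH_BINARY)  # first overexposed image
        over[object_mask > 0] = 0                      # remove object overexposure -> second overexposed image
        kernel = np.ones((5, 5), np.uint8)
        over = cv2.dilate(over, kernel)                # enlarge the overexposed background region
        over = cv2.GaussianBlur(over, (15, 15), 0)     # soften its boundary
        return over.astype(np.float32) / 255.0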
In one embodiment, the step of correcting the first object region image based on the background attribute information to obtain the second object region image may include: blurring the edges of the object region around the overexposed background region in the first object region image based on the background overexposed image to obtain the second object region image.
If the background region of the image to be processed is overexposed, the foreground object region can look unnatural in subsequent processing. For example, when the background is blurred using the determined object region, the overexposed background leaves the foreground edges overly crisp, so the blurred boundary looks unnatural and the processing effect suffers. Therefore, in the embodiment of the application, the edges of the object region around the overexposed background region in the first object region image can be blurred so that the transition between the object region's edge and the overexposed background is natural, improving subsequent image processing.
Alternatively, the blurring process may use gaussian filtering, mean filtering, median filtering, and other processing manners, which are not limited herein.
In the embodiment of the application, the first object region image can be corrected by using the background overexposure image, so that the transition between the edge of the object region of the foreground and the overexposed background region is more natural, and the image processing effect of subsequent image blurring and the like is improved.
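A sketch of the correction itself, blurring mask edges only where the background is overexposed; the proportional blend rule is an assumption:

    import cv2
    import numpy as np

    def soften_edges_near_overexposure(mask: np.ndarray, over_map: np.ndarray) -> np.ndarray:
        """Blur object-region edges around the overexposed background.

        mask: object region image as a float map in [0, 1].
        over_map: background overexposure map in [0, 1].
        Blends in a Gaussian-blurred mask where over_map is high, so the
        foreground edge transitions naturally into the overexposed background.
        """
        blurred = cv2.GaussianBlur(mask, (15, 15), 0)
        return over_map * blurred + (1.0 - over_map) * mask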
As shown in fig. 10, in one embodiment, another image processing method is provided and can be applied to the electronic device, and the method may include the following steps:
in step 1010, a target object contained in the image to be processed is identified and a first object region image is generated.
Step 1020, determining background attribute information corresponding to the image to be processed according to the image to be processed and the first object region image.
Step 1030, correcting the first object region image based on the background attribute information to obtain a second object region image.
Steps 1010 to 1030 may refer to the related descriptions in the above embodiments and are not repeated here. The background attribute information may also be other attribute information besides the background complexity image and the background overexposed image, such as background color information, which can be used to lower the probability values of pixels in the initial object region whose color is close to the background region, preventing the background from being mistaken for foreground. The electronic device may correct the first object region image based on one or more kinds of background attribute information.
Step 1040, in the case that the image size of the second object area image is inconsistent with the image size of the image to be processed, performing upsampling filtering processing on the second object area image based on the background attribute information, to obtain a third object area image matched with the image to be processed.
In some embodiments, the image size of the first object region image may not match that of the image to be processed, so the corrected second object region image also may not match. For example, when the image to be processed is fed to the segmentation model to obtain the first object region image, it may first be preprocessed (e.g., rotated and cropped) so that it conforms to the input size the segmentation model requires. Therefore, when the image size of the second object region image does not match that of the image to be processed, the second object region image can be upsampled and filtered to obtain a third object region image matched to the image to be processed, and the object region in the image to be processed can then be located accurately using the third object region image.
In one embodiment, the grayscale image of the image to be processed may be used as the guide image of a guided filter, and the second object region image upsampled and filtered by the guided filter to obtain the third object region image. During this filtering, the guided filter references the image information of the grayscale guide, so that texture, edges, and similar features of the output third object region image resemble those of the grayscale image.
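A hedged sketch using the guided filter from opencv-contrib (cv2.ximgproc); the radius and eps values are illustrative assumptions:

    import cv2

    def guided_upsample(mask_small, guide_gray):
        """Upsample an object region image under a grayscale guide image.

        mask_small: low-resolution float mask in [0, 1].
        guide_gray: full-resolution grayscale image of the image to be processed.
        Requires opencv-contrib-python for cv2.ximgproc.
        """
        h, w = guide_gray.shape[:2]
        upsampled = cv2.resize(mask_small, (w, h), interpolation=cv2.INTER_LINEAR)
        # Guided filtering snaps the upsampled mask to the guide's edges and texture.
        return cv2.ximgproc.guidedFilter(guide_gray, upsampled.astype('float32'), 8, 1e-4)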
In some embodiments, the electronic device may also choose the upsampling filter according to the background complexity of the image to be processed. When the background complexity is low (a simple background), the guided filter can be used to upsample the second object region image; when the background complexity is high (a complex background), a bilinear interpolation algorithm can be used directly. This prevents the background region from being mistaken for the object region when the background is complex, improving the accuracy of the upsampled third object region image.
As a specific implementation manner, the electronic device may first perform region division on the second object region image according to the background complexity image of the image to be processed, so as to obtain a background simple region and a background complex region, where the background simple region is a background region with a complexity lower than or equal to the complexity threshold, and the background complex region is a background region with a complexity higher than the complexity threshold.
Different filtering methods can be used for the background simple region and the background complex region. For the background simple region, the object region around it in the second object region image is upsampled through the guided filter, producing a first filtering result.
For the background complex region, a bilinear interpolation algorithm upsamples the object region around it in the second object region image, producing a second filtering result. Bilinear interpolation is the two-variable extension of linear interpolation: it interpolates linearly in each of the two directions, estimating each unknown (upsampled) pixel from its four known neighboring pixels in the second object region image.
After the first filtering result and the second filtering result are obtained, the electronic device can fuse them to obtain the third object region image. As an embodiment, Alpha fusion processing may be applied: the background complexity image is used as the Alpha value of the second filtering result, and the first filtering result and the second filtering result are Alpha-fused according to the background complexity image to obtain the third object region image.
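A hedged sketch of the per-region variant just described (the function names and the uint8 mask convention are assumptions; `guided_upsample` is the helper from the earlier snippet):

```python
import cv2
import numpy as np

def fuse_upsampled(mask_small, gray_full, complexity_map):
    # Region-dependent upsampling: guide filter near simple background,
    # bilinear interpolation near complex background, Alpha-fused with
    # the background complexity image as the Alpha of the bilinear result.
    h, w = gray_full.shape[:2]
    first = guided_upsample(mask_small, gray_full).astype(np.float32)
    second = cv2.resize(mask_small, (w, h),
                        interpolation=cv2.INTER_LINEAR).astype(np.float32)
    alpha = complexity_map.astype(np.float32) / 255.0
    # out = alpha * second + (1 - alpha) * first
    out = alpha * second + (1.0 - alpha) * first
    return np.clip(out, 0, 255).astype(np.uint8)
```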
By applying different up-sampling filtering modes to the object regions around background regions of different complexity in the second object region image, the chance that the background region is mistaken for the object region is reduced and the accuracy of the determined object region is improved. It should be noted that other up-sampling filtering methods, such as a bicubic interpolation algorithm or a nearest-neighbor interpolation algorithm, may also be used, which is not limited in the embodiments of the present application.
In some embodiments, after the electronic device obtains the second object region image, the image to be processed may be subjected to blurring processing according to the second object region image to obtain the target image. As an embodiment, the object region of the image to be processed may be determined from the second object region image, so that the object region can be accurately located and separated from the background region. The separated background region can then be blurred and spliced with the object region to obtain the target image; blurring the background region makes the object region stand out.
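A minimal sketch of this splice-based variant, assuming a binary uint8 mask and a BGR image; the Gaussian kernel size is illustrative:

```python
import cv2

def blur_background_hard(image, object_mask):
    # Blur a copy of the image, then paste the untouched object region
    # back on top of the blurred background.
    blurred = cv2.GaussianBlur(image, (31, 31), 0)
    out = blurred.copy()
    fg = object_mask > 0
    out[fg] = image[fg]
    return out
```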
As another embodiment, the image to be processed may be subjected to blurring processing to obtain a blurred image, and the image to be processed and the blurred image may be fused based on the second object region image to obtain the target image. That is, the electronic device may blur the entire image to be processed and fuse the blurred result with the image before blurring (i.e., the image to be processed). Specifically, the fusion may be an Alpha fusion process: the second object region image serves as the Alpha value corresponding to the blurred image, and the image to be processed and the blurred image are fused based on the second object region image to obtain the target image.
Fig. 11 is a schematic diagram of blurring an image to be processed based on a second object region image in one embodiment. As shown in fig. 11, the electronic device may perform blurring processing on the image 1110 to be processed to obtain a blurred image 1120, use the second object region image 1130 as the Alpha value of the blurred image 1120, and fuse the image 1110 to be processed with the blurred image 1120 based on the second object region image 1130 to obtain the target image 1140. Fusing the image to be processed with the blurred image by means of the second object region image improves the accuracy of foreground-background separation, makes the target image obtained after blurring look more natural, and improves the background blurring effect of the image.
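A sketch of the Alpha-fusion pipeline of Fig. 11 under the same assumptions (a soft uint8 mask in [0, 255]; the kernel size is illustrative):

```python
import cv2
import numpy as np

def blur_background_soft(image, object_mask):
    # Blur the whole image, then per-pixel Alpha-blend the original and
    # the blurred image with the object-region image as the Alpha value.
    blurred = cv2.GaussianBlur(image, (31, 31), 0)
    alpha = (object_mask.astype(np.float32) / 255.0)[..., None]  # H x W x 1
    out = alpha * image.astype(np.float32) + (1.0 - alpha) * blurred.astype(np.float32)
    return np.clip(out, 0, 255).astype(np.uint8)
```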
In some embodiments, if up-sampling filtering processing is performed on the second object region image to obtain a third object region image, the blurring processing may be performed on the image to be processed according to the third object region image to obtain the target image. The manner of blurring the image to be processed by using the third object region image is consistent with the manner of blurring it by using the second object region image in the above embodiments, and is therefore not repeated here.
In the embodiment of the application, the up-sampling filtering processing can be performed on the second object region image, so that a third object region image with higher resolution can be obtained, and the fineness and the accuracy of the determined object region are higher.
As shown in fig. 12, in one embodiment, an image processing apparatus 1200 is provided and can be applied to the above-mentioned electronic device, where the image processing apparatus 1200 includes an identification module 1210, a background information determination module 1220, and a correction module 1230.
The identifying module 1210 is configured to identify a target object included in the image to be processed, and generate a first object area image, where the first object area image is used to describe an initial object area of the target object in the image to be processed.
The background information determining module 1220 is configured to determine background attribute information corresponding to the image to be processed according to the image to be processed and the first object area image.
The correction module 1230 is configured to correct the first object area image based on the background attribute information to obtain a second object area image, where the second object area image is used to describe an object area of the target object after correction in the image to be processed.
In the embodiment of the application, the target object contained in the image to be processed is identified and a first object region image is generated, where the first object region image describes the initial object region of the target object in the image to be processed. Background attribute information corresponding to the image to be processed is determined according to the image to be processed and the first object region image, and the first object region image is corrected based on the background attribute information to obtain a second object region image. The initial object region of the target object can thus be corrected based on the background attribute information of the image to be processed, yielding a more accurate and finer corrected object region; the object region of the target object in the image to be processed can be accurately determined by using the second object region image, which improves the effect of subsequent image processing such as foreground-background separation.
In one embodiment, the background attribute information includes a background complexity image. The background information determining module 1220 is further configured to perform edge detection on the image to be processed to obtain a first edge image; remove the edges of the target object in the first edge image according to the first object region image to obtain a second edge image; and perform expansion processing and blurring processing on the second edge image to obtain the background complexity image.
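An illustrative sketch of the pipeline this module describes; the Canny thresholds and kernel sizes are assumptions:

```python
import cv2
import numpy as np

def background_complexity(image, object_mask):
    # Edge detection -> remove the target object's edges -> dilation and
    # blurring, yielding a soft background complexity map.
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)        # first edge image
    edges[object_mask > 0] = 0              # second edge image
    dilated = cv2.dilate(edges, np.ones((9, 9), np.uint8))
    return cv2.GaussianBlur(dilated, (21, 21), 0)
```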
In one embodiment, the correction module 1230 is further configured to perform erosion processing on the first object region image according to the background complexity image to obtain a second object region image.
In one embodiment, the correction module 1230 is further configured to determine, according to the background complexity image, a background complex region in the first object region image with a complexity greater than the complexity threshold; perform corrosion treatment on the object region around the background complex region in the first object region image; and fuse the first object region image before the corrosion treatment with the first object region image after the corrosion treatment to obtain the second object region image.
In one embodiment, the correction module 1230 is further configured to determine a fusion weight corresponding to the eroded first object region image according to the background complexity image; and performing fusion processing on the first object region image before the corrosion processing and the first object region image after the corrosion processing based on the fusion weight to obtain a second object region image.
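A hedged sketch of this erosion-plus-fusion correction; the structuring-element size is illustrative, and using the normalized complexity map directly as the fusion weight is one plausible reading rather than the stated method:

```python
import cv2
import numpy as np

def erode_near_complex(object_mask, complexity_map):
    # Erode the object mask, then fuse eroded and original masks with
    # per-pixel weights from the background complexity image, so the
    # erosion only takes effect near complex background.
    eroded = cv2.erode(object_mask, np.ones((5, 5), np.uint8)).astype(np.float32)
    w = complexity_map.astype(np.float32) / 255.0
    out = w * eroded + (1.0 - w) * object_mask.astype(np.float32)
    return np.clip(out, 0, 255).astype(np.uint8)
```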
In one embodiment, the correction module 1230 is further configured to adjust the tone curve of the first object region image according to the background complexity image to obtain the second object region image.
In one embodiment, the correction module 1230 is further configured to adjust the tone curve of the first object region image according to a negative correlation formula and the background complexity image.
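The application does not spell out the negative-correlation formula; one plausible reading, given purely as an assumption, scales mask values down in proportion to background complexity:

```python
import numpy as np

def tone_adjust(object_mask, complexity_map, k=0.5):
    # Pull mask values toward zero where background complexity is high;
    # the strength factor k is a hypothetical parameter.
    m = object_mask.astype(np.float32) / 255.0
    c = complexity_map.astype(np.float32) / 255.0
    return np.clip(m * (1.0 - k * c) * 255.0, 0, 255).astype(np.uint8)
```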
In the embodiment of the application, the background complexity image can be utilized to correct the first object region image, so that the possibility that the background region is mistaken for the object region can be reduced, and the object region of the target object in the image to be processed can be more accurately determined.
In one embodiment, the background attribute information comprises a background overexposed image, which comprises the overexposed background regions, i.e. the regions in the background of the image to be processed whose exposure value is greater than an exposure threshold.
The background information determining module 1220 is further configured to determine an overexposed area in the image to be processed, where the exposure value is greater than the exposure threshold, and generate a first overexposed image; removing an object overexposure region in the first overexposure image according to the first object region image to obtain a second overexposure image; and performing expansion processing and blurring processing on the second overexposed image to obtain a background overexposed image.
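A sketch of this overexposure pipeline under the assumption that a plain grayscale threshold stands in for the exposure value (the threshold 240 and the kernel sizes are illustrative):

```python
import cv2
import numpy as np

def background_overexposure(image, object_mask, expo_thresh=240):
    # Mark overexposed pixels, drop those inside the object region,
    # then dilate and blur into a soft background overexposed image.
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    over = (gray > expo_thresh).astype(np.uint8) * 255   # first overexposed image
    over[object_mask > 0] = 0                            # second overexposed image
    over = cv2.dilate(over, np.ones((9, 9), np.uint8))
    return cv2.GaussianBlur(over, (21, 21), 0)
```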
In one embodiment, the correction module 1230 is further configured to blur the edge of the object region around the overexposed background region in the first object region image based on the background overexposed image, to obtain a second object region image.
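One way to realize this edge softening, given as a hedged sketch (taking the blend strength directly from the background overexposed image is an assumption):

```python
import cv2
import numpy as np

def soften_edges_near_overexposure(object_mask, over_map):
    # Blend the mask toward a blurred copy of itself where the
    # background is overexposed, so the foreground edge transitions
    # more naturally into bright background.
    soft = cv2.GaussianBlur(object_mask, (15, 15), 0).astype(np.float32)
    w = over_map.astype(np.float32) / 255.0
    out = w * soft + (1.0 - w) * object_mask.astype(np.float32)
    return np.clip(out, 0, 255).astype(np.uint8)
```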
In the embodiment of the application, the first object region image can be corrected by using the background overexposure image, so that the transition between the edge of the object region of the foreground and the overexposed background region is more natural, and the image processing effect of subsequent image blurring and the like is improved.
In one embodiment, the image processing apparatus 1200 includes a filtering module in addition to the identifying module 1210, the background information determining module 1220 and the correcting module 1230.
And the filtering module is used for carrying out up-sampling filtering processing on the second object region image based on the background attribute information under the condition that the image size of the second object region image is inconsistent with the image size of the image to be processed, so as to obtain a third object region image matched with the image to be processed.
In one embodiment, the background attribute information includes a background complexity image. The filtering module comprises a dividing unit, a first filtering unit, a second filtering unit, and a fusion unit.
The dividing unit is used for carrying out region division on the second object region image according to the background complexity image to obtain a background simple region and a background complexity region, wherein the background simple region is a background region with complexity lower than or equal to a complexity threshold value, and the background complexity region is a background region with complexity higher than the complexity threshold value.
And the first filtering unit is used for carrying out up-sampling filtering processing on the object area around the background simple area in the second object area image through the guide filter to obtain a first filtering result.
And the second filtering unit is used for carrying out up-sampling filtering processing on the object area around the background complex area in the second object area image by adopting a bilinear interpolation algorithm to obtain a second filtering result.
And the fusion unit is used for fusing the first filtering result and the second filtering result to obtain a third object region image.
In one embodiment, the image processing apparatus 1200 further includes a blurring module.
The blurring module is used for blurring the image to be processed to obtain a blurred image, and fusing the image to be processed and the blurred image based on the second object region image to obtain a target image.
In the embodiment of the application, the up-sampling filtering processing can be performed on the second object region image, so that a third object region image with higher resolution can be obtained, and the fineness and the accuracy of the determined object region are higher.
Fig. 13 is a block diagram of an electronic device in one embodiment. As shown in fig. 13, the electronic device 1300 may include one or more of the following: a processor 1310, a memory 1320 coupled to the processor 1310, wherein the memory 1320 may store one or more computer programs that may be configured to implement the methods as described in the embodiments above when executed by the one or more processors 1310.
Processor 1310 may include one or more processing cores. The processor 1310 uses various interfaces and lines to connect the various parts of the electronic device 1300, and performs the various functions of the electronic device 1300 and processes data by running or executing instructions, programs, code sets, or instruction sets stored in the memory 1320 and invoking data stored in the memory 1320. Alternatively, the processor 1310 may be implemented in hardware in at least one of digital signal processing (Digital Signal Processing, DSP), field programmable gate array (Field-Programmable Gate Array, FPGA), and programmable logic array (Programmable Logic Array, PLA). The processor 1310 may integrate one or a combination of a central processing unit (Central Processing Unit, CPU), a graphics processing unit (Graphics Processing Unit, GPU), a modem, etc. The CPU mainly handles the operating system, user interface, application programs, and the like; the GPU is responsible for rendering and drawing display content; the modem handles wireless communication. It will be appreciated that the modem may also not be integrated into the processor 1310 and may instead be implemented by a separate communication chip.
The Memory 1320 may include a random access Memory (Random Access Memory, RAM) or a Read-Only Memory (ROM). Memory 1320 may be used to store instructions, programs, code, sets of codes, or instruction sets. The memory 1320 may include a stored program area and a stored data area, wherein the stored program area may store instructions for implementing an operating system, instructions for implementing at least one function (such as a touch function, a sound playing function, an image playing function, etc.), instructions for implementing the various method embodiments described above, and the like. The storage data area may also store data or the like created by the electronic device 1300 in use.
It is to be appreciated that the electronic device 1300 may include more or fewer structural elements than those shown in the above structural block diagram, for example a power module, physical keys, a WiFi (Wireless Fidelity) module, a speaker, a Bluetooth module, sensors, etc., which is not limited herein.
The present application discloses a computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the method as described in the above embodiments.
The present embodiments disclose a computer program product comprising a non-transitory computer readable storage medium storing a computer program, which when executed by a processor, implements a method as described in the above embodiments.
Those skilled in the art will appreciate that all or part of the processes in the methods of the above embodiments may be implemented by a computer program for instructing relevant hardware, where the program may be stored in a non-volatile computer readable storage medium, and where the program, when executed, may include processes in the embodiments of the methods described above. Wherein the storage medium may be a magnetic disk, an optical disk, a ROM, etc.
Any reference to memory, storage, database, or other medium as used herein may include non-volatile and/or volatile memory. Suitable non-volatile memory can include ROM, programmable ROM (PROM), erasable PROM (EPROM), electrically erasable PROM (Electrically Erasable PROM, EEPROM), or flash memory. Volatile memory can include random access memory (Random Access Memory, RAM), which acts as external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as Static RAM (SRAM), Dynamic RAM (Dynamic Random Access Memory, DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (Enhanced Synchronous DRAM, ESDRAM), Synchlink DRAM (SLDRAM), Rambus Direct RAM (RDRAM), and Direct Rambus Dynamic RAM (DRDRAM).
It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present application. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. Those skilled in the art will also appreciate that the embodiments described in the specification are all alternative embodiments and that the acts and modules referred to are not necessarily required in the present application.
In various embodiments of the present application, it should be understood that the size of the sequence numbers of the above processes does not mean that the execution sequence of the processes is necessarily sequential, and the execution sequence of the processes should be determined by the functions and internal logic thereof, and should not constitute any limitation on the implementation process of the embodiments of the present application.
The units described above as separate components may or may not be physically separate, and components shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The foregoing has described in detail the image processing method, apparatus, electronic device, and computer-readable storage medium disclosed in the embodiments of the present application. Specific examples have been used herein to illustrate the principles and implementations of the present application, and the above description of the embodiments is only intended to aid understanding of the method and its core idea. Meanwhile, since those skilled in the art may make changes to the specific implementations and the scope of application in accordance with the ideas of the present application, the contents of this description should not be construed as limiting the present application.

Claims (15)

1. An image processing method, comprising:
identifying a target object contained in an image to be processed, and generating a first object region image, wherein the first object region image is used for describing an initial object region of the target object in the image to be processed;
Determining background attribute information corresponding to the image to be processed according to the image to be processed and the first object area image; the background attribute information comprises one or more of background complexity, background exposure and background color information;
and correcting the first object region image based on the background attribute information to obtain a second object region image, wherein the second object region image is used for describing the corrected object region of the target object in the image to be processed.
2. The method of claim 1, wherein the background attribute information comprises a background complexity image; the correcting the first object area image based on the background attribute information to obtain a second object area image includes:
and performing corrosion treatment on the first object region image according to the background complexity image to obtain a second object region image.
3. The method according to claim 2, wherein the corroding the first object region image according to the background complexity image to obtain a second object region image includes:
determining a background complex region with complexity greater than a complexity threshold in the first object region image according to the background complexity image;
Performing corrosion treatment on an object region around the background complex region in the first object region image;
and fusing the first object area image before corrosion treatment with the first object area image after corrosion treatment to obtain a second object area image.
4. The method of claim 3, wherein fusing the first object region image before the corrosion treatment with the first object region image after the corrosion treatment to obtain the second object region image comprises:
determining fusion weights corresponding to the corroded first object region images according to the background complexity images;
and carrying out fusion processing on the first object region image before corrosion processing and the first object region image after corrosion processing based on the fusion weight to obtain a second object region image.
5. The method of claim 1, wherein the background attribute information comprises a background complexity image; the correcting the first object area image based on the background attribute information to obtain a second object area image includes:
and adjusting the tone curve of the first object region image according to the background complexity image to obtain a second object region image.
6. The method of claim 5, wherein adjusting the tone curve of the first object region image according to the background complexity image to obtain a second object region image comprises:
and adjusting the tone curve of the first object region image according to the negative correlation formula and the background complexity image.
7. The method according to any one of claims 2 to 6, wherein determining the background attribute information corresponding to the image to be processed according to the image to be processed and the first object area image includes:
performing edge detection on the image to be processed to obtain a first edge image;
removing the edge of the target object in the first edge image according to the first object area image to obtain a second edge image;
and performing expansion processing and blurring processing on the second edge image to obtain a background complexity image.
8. The method according to claim 1, wherein the background attribute information comprises a background overexposed image comprising overexposed background areas of the image to be processed having an exposure value greater than an exposure threshold; the correcting the first object area image based on the background attribute information to obtain a second object area image includes:
And carrying out blurring processing on the edges of the object areas around the overexposed background area in the first object area image based on the background overexposed image to obtain a second object area image.
9. The method according to claim 8, wherein the determining the background attribute information corresponding to the image to be processed according to the image to be processed and the first object area image includes:
determining an overexposed region with an exposure value larger than an exposure threshold value in the image to be processed, and generating a first overexposed image;
removing an object overexposure region in the first overexposure image according to the first object region image to obtain a second overexposure image;
and performing expansion processing and blurring processing on the second overexposed image to obtain a background overexposed image.
10. The method according to claim 1, wherein after the obtaining of the second object region image, the method further comprises:
and under the condition that the image size of the second object region image is inconsistent with the image size of the image to be processed, performing up-sampling filtering processing on the second object region image based on the background attribute information to obtain a third object region image matched with the image to be processed.
11. The method of claim 10, wherein the background attribute information comprises a background complexity image; and the performing up-sampling filtering processing on the second object region image to obtain a third object region image matched with the image to be processed includes:
performing region division on the second object region image according to the background complexity image to obtain a background simple region and a background complex region, wherein the background simple region is a background region with complexity lower than or equal to a complexity threshold value, and the background complex region is a background region with complexity higher than the complexity threshold value;
performing up-sampling filtering processing on an object region around the background simple region in the second object region image through a guide filter to obtain a first filtering result;
performing up-sampling filtering processing on an object region around the background complex region in the second object region image by adopting a bilinear interpolation algorithm to obtain a second filtering result;
and fusing the first filtering result and the second filtering result to obtain a third object region image.
12. The method according to any one of claims 1-6, 8-11, further comprising:
blurring the image to be processed to obtain a blurred image;
and fusing the image to be processed and the blurred image based on the second object region image to obtain a target image.
13. An image processing apparatus, comprising:
the identification module is used for identifying a target object contained in an image to be processed and generating a first object area image, wherein the first object area image is used for describing an initial object area of the target object in the image to be processed;
the background information determining module is used for determining background attribute information corresponding to the image to be processed according to the image to be processed and the first object area image; the background attribute information comprises one or more of background complexity, background exposure and background color information;
the correction module is used for correcting the first object area image based on the background attribute information to obtain a second object area image, and the second object area image is used for describing an object area of the target object after correction in the image to be processed.
14. An electronic device comprising a memory and a processor, the memory having stored therein a computer program which, when executed by the processor, causes the processor to implement the method of any of claims 1 to 12.
15. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the method according to any one of claims 1 to 12.
GR01 Patent grant