
CN117135445A - Image processing method and device

Image processing method and device

Info

Publication number
CN117135445A
Authority
CN
China
Prior art keywords
image
pixel point
parameter
determining
weight
Prior art date
Legal status
Pending
Application number
CN202311090137.6A
Other languages
Chinese (zh)
Inventor
凡芳 (Fan Fang)
唐文峰 (Tang Wenfeng)
曹治国 (Cao Zhiguo)
彭珏文 (Peng Juewen)
张华琪 (Zhang Huaqi)
顾弘 (Gu Hong)
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202311090137.6A
Publication of CN117135445A
Legal status: Pending


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/63 Control of cameras or camera modules by using electronic viewfinders
    • H04N 23/631 Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/62 Control of parameters via user interfaces
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10004 Still image; Photographic image
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M 2250/00 Details of telephonic subscriber devices
    • H04M 2250/52 Details of telephonic subscriber devices including functional features of a camera

Landscapes

  • Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Processing (AREA)

Abstract

The application discloses an image processing method and device, and belongs to the technical field of image processing. The image processing method comprises the following steps: acquiring a first image, a second image, and a first parameter of a camera of an electronic device, wherein the first image is acquired by the camera and the second image is a disparity map of the first image; determining a two-dimensional coordinate graph according to the size parameters of the first image; determining a third image according to the two-dimensional coordinate graph and the first parameter, wherein the third image is a state coordinate graph of the first image; and rendering the first image according to the second image, the third image, and a second parameter of the camera to obtain a fourth image.

Description

Image processing method and device
Technical Field
The application belongs to the technical field of image processing, and particularly relates to an image processing method and device.
Background
Currently, when a user shoots an image with an electronic device, a portrait mode or a large-aperture mode is generally used so that the electronic device simulates the out-of-focus blurring (bokeh) effect produced when shooting with a large-aperture single-lens reflex (SLR) camera.
However, an SLR camera is constrained by its own physical structure when capturing an image, which produces vignetting (darkened image corners) and aperture erosion (the cat's-eye deformation of out-of-focus highlights). Since the camera of an electronic device does not share the SLR's physical structure, when the electronic device simulates the out-of-focus blurring effect of an SLR it is difficult to reproduce these real camera characteristics, which reduces the realism of the simulated shot.
Disclosure of Invention
Embodiments of the application aim to provide an image processing method and device that can solve the problem that the camera of an electronic device has difficulty simulating real camera characteristics, so that the realism of camera simulation on the electronic device is low.
In a first aspect, an embodiment of the present application provides an image processing method, including: acquiring a first image, a second image, and a first parameter of a camera of an electronic device, wherein the first image is acquired by the camera and the second image is a disparity map of the first image; determining a two-dimensional coordinate graph according to the size parameters of the first image; determining a third image according to the two-dimensional coordinate graph and the first parameter, wherein the third image is a state coordinate graph of the first image; and rendering the first image according to the second image, the third image, and a second parameter of the camera to obtain a fourth image.
In a second aspect, an embodiment of the present application provides an image processing apparatus, including: an acquisition unit configured to acquire a first image, a second image, and a first parameter of a camera of an electronic device, wherein the first image is acquired by the camera and the second image is a disparity map of the first image; and a processing unit configured to determine a two-dimensional coordinate graph according to the size parameters of the first image, to determine a third image according to the two-dimensional coordinate graph and the first parameter, wherein the third image is a state coordinate graph of the first image, and to render the first image according to the second image, the third image, and a second parameter of the camera to obtain a fourth image.
In a third aspect, an embodiment of the present application provides an electronic device comprising a processor and a memory storing a program or instructions executable on the processor, the program or instructions implementing the steps of the image processing method as in the first aspect when executed by the processor.
In a fourth aspect, embodiments of the present application provide a readable storage medium having stored thereon a program or instructions which, when executed by a processor, implement the steps of the image processing method as in the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, the chip including a processor and a communication interface, the communication interface being coupled to the processor, the processor being configured to execute programs or instructions for implementing the steps of the image processing method as in the first aspect.
In a sixth aspect, embodiments of the present application provide a computer program product stored in a storage medium, the program product being executable by at least one processor to perform the steps of the image processing method as in the first aspect.
In the image processing method provided by the embodiments of the application, a first image, a second image, and a first parameter of a camera of an electronic device are acquired, wherein the first image is acquired by the camera and the second image is a disparity map of the first image; a two-dimensional coordinate graph is determined according to the size parameters of the first image; a third image, i.e. a state coordinate graph of the first image, is determined according to the two-dimensional coordinate graph and the first parameter; and the first image is rendered according to the second image, the third image, and a second parameter of the camera to obtain a fourth image. In this method, the state coordinate graph of the first image is obtained by combining the first parameter of the camera with the first image, and the first image is then rendered by combining its state coordinate graph, the second parameter of the camera, and its disparity map to obtain the fourth image. In this way, the vignetting and aperture erosion produced when a real camera shoots can be simulated, so that real camera characteristics can be reproduced by the electronic device, which improves the realism with which the camera of the electronic device simulates a real camera.
Drawings
FIG. 1 is a first flowchart of an image processing method according to an embodiment of the present application;
FIG. 2 is a second flowchart of an image processing method according to an embodiment of the present application;
FIG. 3 is a third flowchart of an image processing method according to an embodiment of the present application;
FIG. 4 is a fourth flowchart of an image processing method according to an embodiment of the present application;
FIG. 5 is a fifth flowchart of an image processing method according to an embodiment of the present application;
FIG. 6 is a sixth flowchart of an image processing method according to an embodiment of the present application;
FIG. 7 is a seventh flowchart of an image processing method according to an embodiment of the present application;
FIG. 8 is a block diagram of an image processing apparatus according to an embodiment of the present application;
FIG. 9 is a block diagram of an electronic device according to an embodiment of the present application;
FIG. 10 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions of the embodiments of the present application will be clearly described below with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which are obtained by a person skilled in the art based on the embodiments of the present application, fall within the scope of protection of the present application.
The terms "first", "second", and the like in the description and claims are used to distinguish between similar objects and not necessarily to describe a particular sequence or chronological order. It is to be understood that terms so used are interchangeable under appropriate circumstances, so that the embodiments of the application can be practiced in sequences other than those illustrated or described herein. Moreover, the objects identified by "first", "second", etc. are generally of one type, and the number of objects is not limited; for example, the first object may be one object or a plurality of objects. Furthermore, in the description and claims, "and/or" denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.
The image processing method provided by the embodiment of the application is described in detail below through specific embodiments and application scenes thereof with reference to the accompanying drawings.
As shown in fig. 1, an embodiment of the present application provides an image processing method, which may include the following steps S102 to S106:
s102: and acquiring a first image, a second image and a first parameter of a camera of the electronic equipment.
The image processing method provided by the embodiments of the application is executed by an electronic device, which may specifically be an electronic device with a shooting function, such as a smartphone, tablet computer, or notebook computer; this is not specifically limited here.
The first image may specifically be a captured image collected in real time by the camera of the electronic device during shooting, or a historical captured image stored in an album of the electronic device; this is not limited here.
Further, the second image is a disparity map of the first image.
In practical applications, the electronic device may include two cameras, such as a binocular camera pair, and the disparity map is determined from the pixel-position offset of the same shooting scene imaged by the different cameras. For example, if point P in the shooting scene has pixel coordinate x in the first camera and pixel coordinate x + d in the second camera, then d is the value at the corresponding pixel coordinate x in the disparity map.
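To make this relationship concrete, the following minimal sketch computes a single disparity value (the coordinate values are hypothetical, chosen only for illustration):

```python
# Minimal illustration of the disparity relationship described above.
# The coordinate values are hypothetical, not taken from the patent.
x_cam1 = 340          # x-coordinate of point P in the first camera's image
x_cam2 = 352          # x-coordinate of the same point P in the second camera's image
d = x_cam2 - x_cam1   # disparity: the value stored at pixel coordinate x = 340
print(d)              # 12
```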
Further, the first parameter may specifically be a camera parameter related to the aperture erosion and vignetting of the camera of the electronic device; this camera parameter controls the aperture-erosion and vignetting states corresponding to different regions of the image.
In practical applications, the first parameter may specifically include a first sub-parameter and a second sub-parameter. The first sub-parameter is used to control the rate at which the image state changes with the distance from a pixel point to the image center, and the second sub-parameter is used to control the threshold of the image-state change.
S104: and determining a two-dimensional coordinate graph according to the size parameter of the first image.
The size parameter of the first image may specifically include a width and a height of the first image.
Specifically, in the image processing method provided by the embodiment of the application, after the first image is acquired, a two-dimensional coordinate graph of the first image is generated according to the width and the height of the first image.
In practical applications, when the width of the first image is W and the height of the first image is H, the two-dimensional coordinate graph (X, Y) of the first image may specifically be determined by the following formula (1) and formula (2), i.e. as the pixel-index grid:

X(i, j) = j, j = 0, 1, …, W − 1,  (1)
Y(i, j) = i, i = 0, 1, …, H − 1,  (2)

wherein X represents the abscissa value in the two-dimensional coordinate graph corresponding to each pixel point in the first image, Y represents the ordinate value in the two-dimensional coordinate graph corresponding to each pixel point in the first image, W represents the width of the first image, and H represents the height of the first image.
S106: and determining a third image according to the two-dimensional coordinate graph and the first parameter.
The third image is a state coordinate graph of the first image.
Specifically, in the image processing method provided by the embodiment of the application, after the two-dimensional coordinate graph of the first image is generated, normalization processing is performed on the two-dimensional coordinate graph of the first image based on the normalization function, so as to obtain the normalized two-dimensional coordinate graph.
In an actual application process, specifically, a center point of a two-dimensional coordinate graph of the first image may be used as an origin, and normalization processing is performed on the two-dimensional coordinate graph of the first image by the following formula (3) and formula (4) so as to normalize each coordinate value in the two-dimensional coordinate graph of the first image to a value between-1 and 1:
X_n = norm(X),  (3)
Y_n = norm(Y),  (4)

wherein X_n represents the normalized value of the abscissa in the two-dimensional coordinate graph, Y_n represents the normalized value of the ordinate in the two-dimensional coordinate graph, norm represents the normalization function, X represents the abscissa value in the two-dimensional coordinate graph corresponding to each pixel point in the first image, and Y represents the ordinate value in the two-dimensional coordinate graph corresponding to each pixel point in the first image.
On this basis, in the image processing method provided by the embodiments of the application, after the two-dimensional coordinate graph of the first image is normalized, the normalized two-dimensional coordinate graph is processed together with the camera parameters related to the aperture erosion and vignetting of the electronic device's camera, i.e. the first parameter, to obtain the state coordinate graph of the first image, i.e. the third image.
In practical applications, when the normalized two-dimensional coordinate graph is (X_n, Y_n), the state coordinate graph (X_S, Y_S) of the first image may specifically be determined by the following equation (5) and equation (6):

X_S = k_1 × (X_n − k_2 × (X_n / ((X_n)² + (Y_n)²)^(1/2))),  (5)
Y_S = k_1 × (Y_n − k_2 × (Y_n / ((X_n)² + (Y_n)²)^(1/2))),  (6)

wherein X_S represents the abscissa of the state coordinate graph of the first image, Y_S represents the ordinate of the state coordinate graph of the first image, k_1 represents the first sub-parameter, which controls the rate at which the image state changes with the distance from a pixel point to the image center, k_2 represents the second sub-parameter, which controls the threshold of the image-state change, and X_n and Y_n represent the abscissa and ordinate of the normalized two-dimensional coordinate graph.
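The construction of the state coordinate graph from equations (1)–(6) can be sketched in NumPy as follows. This is a minimal sketch, not the patent's reference implementation: the function name, the min–max normalization about the image center used for norm, the eps guard, and the example values of k_1 and k_2 are all assumptions.

```python
import numpy as np

def state_coordinate_graph(W, H, k1, k2, eps=1e-8):
    # Equations (1)-(2): pixel-index grid, X(i, j) = j and Y(i, j) = i.
    X, Y = np.meshgrid(np.arange(W, dtype=np.float64),
                       np.arange(H, dtype=np.float64))
    # Equations (3)-(4): normalize each axis to [-1, 1] about the image center
    # (one plausible choice of norm; the patent does not fix the function).
    Xn = (X - (W - 1) / 2) / ((W - 1) / 2)
    Yn = (Y - (H - 1) / 2) / ((H - 1) / 2)
    # Equations (5)-(6): radial state graph; eps (an assumption) guards the
    # division at the exact image center, where Xn = Yn = 0.
    rad = np.sqrt(Xn**2 + Yn**2) + eps
    Xs = k1 * (Xn - k2 * (Xn / rad))
    Ys = k1 * (Yn - k2 * (Yn / rad))
    return Xs, Ys

# Example: a 640 x 480 state coordinate graph with illustrative sub-parameters.
Xs, Ys = state_coordinate_graph(640, 480, k1=1.2, k2=0.3)
```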
S108: and rendering the first image according to the second image, the third image and the second parameter of the camera to obtain a fourth image.
Wherein the second image is the disparity map of the first image.
Further, the third image is the state coordinate graph of the first image.
Further, the second parameter may specifically be a shooting parameter of the camera of the electronic device during shooting. In practical applications, the second parameter may specifically include a blur degree parameter and a focus disparity parameter of the camera of the electronic device; this is not limited here.
Specifically, in the image processing method provided by the embodiments of the application, a first image captured by the camera of the electronic device and the disparity map of the first image are obtained; the two-dimensional coordinate graph of the first image is determined according to its size parameters; and the state coordinate graph of the first image, i.e. the third image, is generated from the two-dimensional coordinate graph and the camera parameters related to the aperture erosion and vignetting of the camera, i.e. the first parameter. On this basis, the first image is rendered according to the blur degree parameter and focus disparity parameter of the camera of the electronic device, i.e. the second parameter, the state coordinate graph of the first image, and the disparity map of the first image, to obtain a fourth image. In this way, the physical-optics principle is integrated into the algorithm: a state coordinate graph is generated from the camera parameters, light-path visibility detection is realized during rendering, realistic and controllable aperture-erosion and vignetting effects are simulated, and the realism with which the electronic device simulates camera shooting is improved.
According to the image processing method provided by the embodiments of the application, a first image, a second image, and a first parameter of a camera of an electronic device are acquired, wherein the first image is acquired by the camera and the second image is a disparity map of the first image; a two-dimensional coordinate graph is determined according to the size parameters of the first image; a third image, i.e. a state coordinate graph of the first image, is determined according to the two-dimensional coordinate graph and the first parameter; and the first image is rendered according to the second image, the third image, and a second parameter of the camera to obtain a fourth image. Through this method, the state coordinate graph of the first image is obtained by combining the first parameter of the camera with the first image, and the first image is then rendered by combining its state coordinate graph, the second parameter of the camera, and its disparity map to obtain the fourth image. In this way, the vignetting and aperture erosion produced when a real camera shoots can be simulated, so that real camera characteristics can be reproduced by the electronic device, which improves the realism with which the camera of the electronic device simulates a real camera.
In the embodiment of the present application, as shown in fig. 2, the above S108 may specifically include the following S108a to S108e:
s108a: and performing first transformation processing on the first image to obtain a fifth image.
Wherein the first image is a fully focused image.
Further, the above-described first transformation process is implemented based on a first transformation algorithm. In the practical application process, the first transformation algorithm may be specifically an image enhancement processing algorithm such as a gamma transformation algorithm, which is not limited herein.
Specifically, in the image processing method provided by the embodiment of the application, in the process of rendering the first image, the first image is subjected to the first transformation processing, so that the first image is subjected to the color space conversion, and the first image in the original color space is converted into the fifth image in the linear color space.
S108b: and determining the blur radius and the blur sign of each pixel point in the fifth image according to the second image and the second parameter.
The blur sign may specifically take the values 1 and −1.
Further, the blur radius is used for controlling a neighborhood range of the blurring process for the neighborhood pixel point of each pixel point in the fifth image.
S108c: and determining a first weight parameter of the fifth image according to the blur radius of each pixel point in the fifth image.
The first weight parameter may specifically be the weight value assigned to each neighborhood pixel point of each pixel point in the fifth image when blurring is applied to those neighborhood pixel points.
S108d: and determining a second weight parameter of the fifth image according to the blur radius and the blur sign of each pixel point in the third image and the fifth image.
The second weight parameter may specifically be the aperture-erosion weight value of each neighborhood pixel point of each pixel point in the fifth image; this weight value can represent the result of the light-path visibility detection performed on each neighborhood pixel point of each pixel point in the fifth image.
S108e: and determining a fourth image according to the first weight parameter and the second weight parameter.
Specifically, in the image processing method provided by the embodiments of the application, during the rendering of the first image, the first image first undergoes the first transformation processing and is converted into the fifth image in a linear color space; the blur radius and blur sign of each pixel point in the fifth image are then determined according to the disparity map of the first image and the blur degree parameter and focus disparity parameter of the camera of the electronic device. On this basis, the first weight parameter of the fifth image is determined based on the blur radius of each pixel point in the fifth image, and the second weight parameter of the fifth image is determined based on the state coordinate graph of the first image and the blur radius and blur sign of each pixel point in the fifth image. Further, an image is rendered based on the first weight parameter and the second weight parameter, yielding a fourth image that simulates the camera's aperture-erosion and vignetting effects. In this way, the physical-optics principle is integrated into the algorithm: a state coordinate graph is generated from the camera parameters, light-path visibility detection is realized during rendering, realistic and controllable aperture-erosion and vignetting effects are simulated, and the realism with which the electronic device simulates camera shooting is improved.
According to the above embodiment of the application, in the process of rendering the first image according to the second image, the third image, and the second parameter of the camera to obtain the fourth image, the first image undergoes the first transformation processing to obtain the fifth image; the blur radius and blur sign of each pixel point in the fifth image are determined according to the second image and the second parameter; the first weight parameter of the fifth image is determined according to the blur radius of each pixel point in the fifth image; the second weight parameter of the fifth image is determined according to the third image and the blur radius and blur sign of each pixel point in the fifth image; and the fourth image is determined according to the first weight parameter and the second weight parameter. In this way, the vignetting and aperture erosion produced when a camera shoots can be simulated, so that real camera characteristics can be reproduced by the electronic device, improving the realism of the simulated camera shot.
In the embodiment of the present application, as shown in fig. 3, the above S108a may specifically include the following S110 and S112:
s110: and performing boundary filling on the first image.
Specifically, in the image processing method provided by the embodiment of the application, before the first transformation processing is performed on the first image, boundary copy filling is performed on the first image so as to enlarge the size of the first image and obtain the fully focused first image to be rendered.
S112: and performing first transformation processing on the first image after the boundary filling according to a first transformation algorithm to obtain a fifth image.
The first transformation algorithm may be specifically an image enhancement processing algorithm such as a gamma transformation algorithm, and is not limited herein.
Specifically, in the image processing method provided by the embodiment of the application, in the process of rendering the first image, boundary copy filling is performed on the first image to obtain a fully focused first image to be rendered, and then, based on a first transformation algorithm, the first image is subjected to first transformation processing to convert the first image in an original color space into a fifth image in a linear color space.
According to the above embodiment of the application, in the process of performing the first transformation processing on the first image to obtain the fifth image, boundary filling is performed on the first image, and the first transformation processing is then applied to the boundary-filled first image according to the first transformation algorithm to obtain the fifth image. In this way, the fifth image is obtained through boundary-replicate filling and the first transformation processing of the first image, which ensures the accuracy of the determination of the fifth image and further improves the realism of the camera-shooting effect subsequently simulated on the basis of the fifth image.
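A possible sketch of this pre-processing step is shown below. The patent names gamma transformation only as one example of a first transformation algorithm, so the gamma value of 2.2, the pad width, and the function name are assumptions for illustration.

```python
import numpy as np

def preprocess(img, pad, gamma=2.2):
    # img: H x W x 3 array in the original (gamma-encoded) color space, in [0, 1].
    # pad: boundary width in pixels, e.g. the maximum blur radius (an assumption).
    padded = np.pad(img, ((pad, pad), (pad, pad), (0, 0)), mode='edge')
    # Gamma-to-linear conversion; the result is the fifth image in a linear
    # color space. Gamma is only one example of a first transformation algorithm.
    return np.power(padded, gamma)
```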
In the embodiment of the present application, the second parameter includes a blur degree parameter and a focus disparity parameter. On this basis, as shown in fig. 4, the above S108b may specifically include the following S114 and S116:
S114: And determining the blur radius of each pixel point in the fifth image according to the pixel value of each pixel point in the second image, the blur degree parameter, and the focus disparity parameter.
The blur degree parameter is used to describe the degree of blur of an image shot by the camera: the clearer the image, the smaller the blur degree parameter, and the more blurred the image, the larger the blur degree parameter.
Further, the focus disparity parameter is used to describe the degree of disparity deviation of an image shot by the camera of the electronic device.
In practical applications, the blur radius of each pixel point in the fifth image may specifically be determined by the following formula (7):

r = K × |D_i − d_f|,  (7)

wherein r is the blur radius of each pixel point in the fifth image, D_i is the pixel value, in the second image, of the pixel point corresponding to pixel point i in the fifth image, K is the blur degree parameter, d_f is the focus disparity parameter, and |D_i − d_f| represents the absolute value of D_i − d_f.

S116: And determining the blur sign of each pixel point in the fifth image according to the result of comparing the pixel value of each pixel point in the second image with the focus disparity parameter.

The focus disparity parameter is used to describe the degree of disparity deviation of an image shot by the camera of the electronic device.

Further, the blur sign may specifically take the values 1 and −1.

In practical applications, the blur sign of each pixel point in the fifth image may specifically be determined by the following formula (8):

s = 1, if D_i − d_f ≥ 0,
s = −1, if D_i − d_f < 0,  (8)

wherein s is the blur sign of each pixel point in the fifth image, D_i is the pixel value, in the second image, of the pixel point corresponding to pixel point i in the fifth image, and d_f is the focus disparity parameter.
Specifically, in the image processing method provided by the embodiments of the application, after the fifth image is obtained, each pixel point in the fifth image is traversed; the blur radius of each pixel point in the fifth image is determined based on the pixel value of each pixel point in the second image, the blur degree parameter, and the focus disparity parameter; and the blur sign of each pixel point in the fifth image is determined based on the result of comparing the pixel value of each pixel point in the second image with the focus disparity parameter.
According to the above embodiment of the application, the second parameter includes the blur degree parameter and the focus disparity parameter. In the process of determining the blur radius and blur sign of each pixel point in the fifth image according to the second image and the second parameter, the blur radius of each pixel point in the fifth image is determined according to the pixel value of each pixel point in the second image, the blur degree parameter, and the focus disparity parameter; and the blur sign of each pixel point in the fifth image is determined according to the result of comparing the pixel value of each pixel point in the second image with the focus disparity parameter. This ensures the accuracy of the determination of the blur radius and blur sign, and further ensures the accuracy of the subsequent determination of the fourth image.
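Formulas (7) and (8) translate directly into array operations. A minimal sketch, assuming the disparity map is a floating-point NumPy array (the function name is illustrative):

```python
import numpy as np

def blur_radius_and_sign(disparity, K, d_f):
    # disparity: the second image (disparity map of the first image)
    # K: blur degree parameter; d_f: focus disparity parameter
    diff = disparity - d_f
    r = K * np.abs(diff)                  # formula (7): blur radius
    s = np.where(diff >= 0, 1.0, -1.0)    # formula (8): blur sign
    return r, s
```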
In the embodiment of the present application, as shown in fig. 5, the above S108c may specifically include the following S118 and S120:
s118: and determining the neighborhood range of each pixel point in the fifth image according to the blur radius of each pixel point in the fifth image.
The blur radius is used for controlling a neighborhood range for performing blur processing on a neighborhood pixel point of each pixel point in the fifth image.
S120: and determining a first weight parameter of the neighborhood pixel point of each pixel point in the fifth image according to the coordinate value of each pixel point, the blur radius and the coordinate value of the neighborhood pixel point in the fifth image.
Specifically, in the image processing method provided by the embodiment of the application, after the fifth image is obtained, each pixel point in the fifth image is traversed, the coordinate difference value of each pixel point in the fifth image and the neighboring pixel point in the horizontal direction and the vertical direction is determined according to the coordinate value of each pixel point in the fifth image and the neighboring pixel point, and further, the first weight parameter of each neighboring pixel point of each pixel point in the fifth image is determined based on the blur radius of each pixel point in the fifth image and the coordinate difference value of each pixel point in the fifth image and the neighboring pixel point.
In practical applications, the first weight parameter of each neighborhood pixel point of each pixel point in the fifth image may specifically be determined by the following formula (9):

w^c_ij = (0.5 + 0.5 × tanh(α × (r − (Δx_ij² + Δy_ij²)^(1/2)))) / (r² + ε),  (9)

wherein w^c_ij represents the first weight parameter of the neighborhood pixel point j of pixel point i in the fifth image, tanh represents the hyperbolic tangent function, r represents the blur radius of pixel point i in the fifth image, Δx_ij represents the horizontal coordinate difference between pixel point i and neighborhood pixel point j in the fifth image, Δy_ij represents the vertical coordinate difference between pixel point i and neighborhood pixel point j in the fifth image, and α and ε are hyper-parameters.
According to the embodiment of the application, in the process of determining the first weight parameter of the fifth image according to the blur radius of each pixel point in the fifth image, the neighborhood range of each pixel point in the fifth image is determined according to the blur radius of each pixel point in the fifth image; and determining a first weight parameter of the neighborhood pixel point of each pixel point in the fifth image according to the coordinate value of each pixel point, the blur radius and the coordinate value of the neighborhood pixel point in the fifth image. Thus, the accuracy of the first weight parameter determination is ensured, and the accuracy of the subsequent fourth image determination based on the first weight parameter is further ensured.
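A minimal sketch of formula (9); the default values of the hyper-parameters α and ε are illustrative assumptions, since the text does not specify them:

```python
import numpy as np

def first_weight(r, dx, dy, alpha=4.0, eps=1e-6):
    # r: blur radius of pixel point i; dx, dy: coordinate differences between
    # pixel point i and its neighborhood pixel point j; alpha, eps: hyper-parameters.
    dist = np.sqrt(dx**2 + dy**2)
    # Soft disc of radius r: the tanh term is ~1 inside the disc and ~0 outside;
    # dividing by r^2 + eps keeps the total weight roughly constant as r grows.
    return (0.5 + 0.5 * np.tanh(alpha * (r - dist))) / (r**2 + eps)
```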
In the embodiment of the present application, as shown in fig. 6, the above S108d may specifically include the following S122 and S124:
s122: and determining the neighborhood range of each pixel point in the fifth image according to the blur radius of each pixel point in the fifth image.
The blur radius is used for controlling a neighborhood range for performing blur processing on a neighborhood pixel point of each pixel point in the fifth image.
S124: and determining a second weight parameter of the neighborhood pixel point of each pixel point in the fifth image according to the coordinate value of each pixel point in the fifth image, the blur radius, the blur symbol, the coordinate value of the neighborhood pixel point and the state coordinate value of the third image.
Specifically, in the image processing method provided by the embodiment of the application, after the fifth image is obtained, each pixel point in the fifth image is traversed, the coordinate difference value of each pixel point in the fifth image and the neighborhood pixel point in the horizontal direction and the vertical direction is determined according to the coordinate value of each pixel point in the fifth image and the neighborhood pixel point, and further, the second weight parameter of each neighborhood pixel point in the fifth image is determined based on the blur radius, the blur symbol, the state coordinate value of the third image and the coordinate difference value of each pixel point in the fifth image and the neighborhood pixel point.
In practical applications, the second weight parameter of each neighborhood pixel point of each pixel point in the fifth image may specifically be determined by the following formula (10):

w^s_ij = (0.5 + 0.5 × tanh(α × (r − ((Δx_ij − s·r·X_S^i)² + (Δy_ij − s·r·Y_S^i)²)^(1/2)))) / (r² + ε),  (10)

wherein w^s_ij represents the second weight parameter of the neighborhood pixel point j of pixel point i in the fifth image, tanh represents the hyperbolic tangent function, r represents the blur radius of pixel point i in the fifth image, s represents the blur sign of pixel point i in the fifth image, Δx_ij and Δy_ij represent the horizontal and vertical coordinate differences between pixel point i and neighborhood pixel point j in the fifth image, X_S^i and Y_S^i represent the abscissa and ordinate of the state coordinate value, in the third image, corresponding to pixel point i in the fifth image, and α and ε are hyper-parameters.
In the above embodiment of the present application, in the process of determining the second weight parameter of the fifth image according to the blur radius and the blur sign of each pixel point in the third image and the fifth image, the neighborhood range of each pixel point in the fifth image is determined according to the blur radius of each pixel point in the fifth image; and determining a second weight parameter of the neighborhood pixel point of each pixel point in the fifth image according to the coordinate value of each pixel point in the fifth image, the blur radius, the blur symbol, the coordinate value of the neighborhood pixel point and the state coordinate value of the third image. Thus, the accuracy of the second weight parameter determination is ensured, and the accuracy of the subsequent fourth image determination based on the second weight parameter is further ensured.
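Formula (10) differs from formula (9) only in that the neighborhood offset is shifted by s·r·(X_S^i, Y_S^i) before the distance test; this shift is what implements the light-path visibility detection. A minimal sketch under the same assumptions as above:

```python
import numpy as np

def second_weight(r, s, dx, dy, Xs_i, Ys_i, alpha=4.0, eps=1e-6):
    # r, s: blur radius and blur sign of pixel point i; dx, dy: coordinate
    # differences to neighborhood pixel point j; Xs_i, Ys_i: the state coordinate
    # values of pixel point i in the third image; alpha, eps: hyper-parameters.
    dist = np.sqrt((dx - s * r * Xs_i)**2 + (dy - s * r * Ys_i)**2)
    return (0.5 + 0.5 * np.tanh(alpha * (r - dist))) / (r**2 + eps)
```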
In the embodiment of the present application, as shown in fig. 7, the above S108e may specifically include the following S126 to S134:
s126: accumulating the first weight parameters of the neighborhood pixel points of each pixel point in the fifth image, or accumulating the products of the first weight parameters and the second weight parameters of the neighborhood pixel points of each pixel point in the fifth image to obtain a first weight matrix.
Specifically, in the image processing method provided by the embodiments of the application, a weight accumulation matrix is initialized; the weight accumulation matrix has the same size as the first image and is an all-zero matrix. On this basis, when both the aperture-erosion and the vignetting effect of the camera are simulated by the electronic device, the first weight parameters of the neighborhood pixel points of each pixel point in the fifth image are accumulated into the corresponding positions of the weight accumulation matrix to obtain the first weight matrix. Alternatively, when only the aperture-erosion effect of the camera is simulated by the electronic device, the products of the first weight parameter and the second weight parameter of the neighborhood pixel points of each pixel point in the fifth image are accumulated into the corresponding positions of the weight accumulation matrix to obtain the first weight matrix.
In practical applications, when both the aperture-erosion and the vignetting effect of the camera are simulated by the electronic device, the first weight matrix may specifically be determined by the following formula (11):

W_j = W_j + w^c_ij,  (11)

wherein W_j represents the first weight matrix and w^c_ij represents the first weight parameter of the neighborhood pixel point j of pixel point i in the fifth image.
Further, when only the aperture-erosion effect of the camera is simulated by the electronic device, the first weight matrix may specifically be determined by the following formula (12):

W_j = W_j + w^c_ij × w^s_ij,  (12)

wherein W_j represents the first weight matrix, w^c_ij represents the first weight parameter of the neighborhood pixel point j of pixel point i in the fifth image, and w^s_ij represents the second weight parameter of the neighborhood pixel point j of pixel point i in the fifth image.
S128: and accumulating the products of the pixel value of each pixel point in the fifth image, the first weight parameter of the neighborhood pixel point and the second weight parameter to obtain a second weight matrix.
Specifically, in the image processing method provided by the embodiment of the application, a color accumulation matrix is initialized, the size of the color accumulation matrix is the same as the size of the first image, and the color accumulation matrix is an all-zero matrix. On the basis, the product value of the pixel value of each pixel point in the fifth image and the first weight parameter and the second weight parameter of the neighborhood pixel point of each pixel point in the fifth image are accumulated to the corresponding position in the color accumulation matrix, so that a second weight matrix is obtained.
In practical applications, the second weight matrix may specifically be determined by the following formula (13):

C_j = C_j + w^c_ij × w^s_ij × I′_i,  (13)

wherein C_j represents the second weight matrix, w^c_ij represents the first weight parameter of the neighborhood pixel point j of pixel point i in the fifth image, w^s_ij represents the second weight parameter of the neighborhood pixel point j of pixel point i in the fifth image, and I′_i represents the pixel value of pixel point i in the fifth image.

S130: And performing point-by-point division on the first weight matrix and the second weight matrix to obtain a first rendered image.
Specifically, in the image processing method provided by the embodiment of the present application, after the first weight matrix and the second weight matrix are obtained, the first weight matrix and the second weight matrix are divided point by point, so as to obtain a first rendered image in a linear color space.
S132: and performing second transformation processing on the first rendered image according to a second transformation algorithm to obtain a second rendered image.
Wherein the second transformation processing is the inverse transformation of the first transformation processing. The first transformation processing may specifically be gamma transformation processing; this is not specifically limited here.
Specifically, in the image processing method provided by the embodiments of the application, after the first rendered image in the linear color space is obtained, based on the color-space conversion adopted when determining the fifth image, the first rendered image is subjected to the inverse of the first transformation processing, i.e. the second transformation processing, according to the second transformation algorithm, so that the rendering result in the linear color space is converted back into the original color space, yielding the second rendered image in the original color space.
S134: And cropping the boundary of the second rendered image to obtain a fourth image.
Specifically, in the image processing method provided by the embodiment of the present application, after the second rendered image is obtained, based on the result of performing boundary copy filling on the first image, the boundary of the second rendered image is cut, so as to restore the size of the second rendered image to the original size of the first image, thereby obtaining a fourth image.
According to the above embodiment of the application, in the process of determining the fourth image according to the first weight parameter and the second weight parameter, the first weight parameters of the neighborhood pixel points of each pixel point in the fifth image are accumulated, or the products of the first weight parameters and the second weight parameters of the neighborhood pixel points of each pixel point in the fifth image are accumulated, to obtain a first weight matrix; the products of the pixel value of each pixel point in the fifth image and the first weight parameter and second weight parameter of its neighborhood pixel points are accumulated to obtain a second weight matrix; the first weight matrix and the second weight matrix are divided point by point to obtain a first rendered image; the first rendered image undergoes second transformation processing according to a second transformation algorithm to obtain a second rendered image, the second transformation processing being the inverse of the first transformation processing; and the boundary of the second rendered image is cropped to obtain the fourth image. In this way, the vignetting and aperture erosion produced when a camera shoots can be simulated, so that real camera characteristics can be reproduced by the electronic device, improving the realism of the simulated camera shot.
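Putting S126 to S134 together, a straightforward (unoptimized) scatter-rendering loop might look like the sketch below, reusing the first_weight and second_weight helpers sketched above. The truncation of the neighborhood range to ceil(r) pixels and the simulate_vignetting switch are assumptions made for illustration. Note how the two normalization cases mirror formulas (11) and (12): normalizing by w^c alone leaves the light blocked by w^s unrecovered, so the image darkens where light paths are occluded (aperture erosion plus vignetting), while normalizing by w^c × w^s renormalizes the brightness so that only the bokeh shape changes (aperture erosion only).

```python
import numpy as np
from math import ceil

def render(img_lin, r_map, s_map, Xs, Ys, simulate_vignetting=True,
           alpha=4.0, eps=1e-6):
    # img_lin: the (padded) fifth image; r_map, s_map: per-pixel blur radius
    # and blur sign; Xs, Ys: the state coordinate graph (third image).
    H, W, _ = img_lin.shape
    W_acc = np.zeros((H, W))       # weight accumulation matrix, all zeros (S126)
    C_acc = np.zeros((H, W, 3))    # color accumulation matrix, all zeros (S128)
    for yi in range(H):
        for xi in range(W):
            r, s = r_map[yi, xi], s_map[yi, xi]
            n = max(1, ceil(r))    # neighborhood range controlled by r (assumed)
            for yj in range(max(0, yi - n), min(H, yi + n + 1)):
                for xj in range(max(0, xi - n), min(W, xi + n + 1)):
                    dx, dy = xj - xi, yj - yi
                    wc = first_weight(r, dx, dy, alpha, eps)
                    ws = second_weight(r, s, dx, dy,
                                       Xs[yi, xi], Ys[yi, xi], alpha, eps)
                    # Formulas (11)/(12): weight accumulation.
                    W_acc[yj, xj] += wc if simulate_vignetting else wc * ws
                    # Formula (13): color accumulation.
                    C_acc[yj, xj] += wc * ws * img_lin[yi, xi]
    # S130: point-by-point division; eps avoids division by zero.
    rendered = C_acc / (W_acc[..., None] + eps)
    # S132 (inverse transform, e.g. linear -> gamma) and S134 (cropping the
    # padded boundary) would follow to produce the fourth image.
    return rendered
```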
In summary, the image processing method provided by the embodiments of the application integrates the physical-optics principle into the algorithm, generates a state coordinate graph from the camera parameters, realizes light-path visibility detection during rendering, simulates realistic and controllable aperture-erosion and vignetting effects, and improves the realism with which the electronic device simulates camera shooting. On this basis, for the approach of simulating camera shooting with a traditional physical renderer plus a neural renderer, the traditional physical renderer can be replaced by the image processing method provided by the embodiments of the application, the state coordinate graph of the input image can be added at the input of the neural network in the neural renderer, and an image processing model simulating the camera's aperture-erosion and vignetting effects can then be obtained simply and conveniently through retraining.
According to the image processing method provided by the embodiment of the application, the execution subject can be an image processing device. In the embodiment of the present application, the image processing apparatus provided in the embodiment of the present application is described by taking the image processing apparatus executing the above image processing method as an example.
As shown in fig. 8, an embodiment of the present application provides an image processing apparatus 900, which may include an acquisition unit 902 and a processing unit 904 described below.
The acquiring unit 902 is configured to acquire a first image, a second image, and a first parameter of a camera of an electronic device, wherein the first image is acquired by the camera and the second image is a disparity map of the first image;
a processing unit 904, configured to determine a two-dimensional coordinate graph according to a size parameter of the first image;
the processing unit 904 is further configured to determine a third image according to the two-dimensional coordinate graph and the first parameter, where the third image is a state coordinate graph of the first image;
the processing unit 904 is further configured to perform rendering processing on the first image according to the second image, the third image, and the second parameter of the camera, to obtain a fourth image.
The image processing apparatus 900 provided by the embodiments of the application acquires a first image, a second image, and a first parameter of a camera of an electronic device, wherein the first image is acquired by the camera and the second image is a disparity map of the first image; determines a two-dimensional coordinate graph according to the size parameters of the first image; determines a third image, i.e. a state coordinate graph of the first image, according to the two-dimensional coordinate graph and the first parameter; and renders the first image according to the second image, the third image, and a second parameter of the camera to obtain a fourth image. Through the image processing apparatus 900, the state coordinate graph of the first image is obtained by combining the first parameter of the camera with the first image, and the first image is then rendered by combining its state coordinate graph, the second parameter of the camera, and its disparity map to obtain the fourth image. In this way, the vignetting and aperture erosion produced when a real camera shoots can be simulated, so that real camera characteristics can be reproduced by the electronic device, which improves the realism with which the camera of the electronic device simulates a real camera.
In the embodiment of the present application, the processing unit 904 is specifically configured to: performing first transformation processing on the first image to obtain a fifth image; determining a blur radius and a blur sign of each pixel point in the fifth image according to the second image and the second parameter; determining a first weight parameter of the fifth image according to the blur radius of each pixel point in the fifth image; determining a second weight parameter of the fifth image according to the blur radius and the blur sign of each pixel point in the third image and the fifth image; and determining a fourth image according to the first weight parameter and the second weight parameter.
According to the above embodiment of the application, in the process of rendering the first image according to the second image, the third image, and the second parameter of the camera to obtain the fourth image, the first image undergoes the first transformation processing to obtain the fifth image; the blur radius and blur sign of each pixel point in the fifth image are determined according to the second image and the second parameter; the first weight parameter of the fifth image is determined according to the blur radius of each pixel point in the fifth image; the second weight parameter of the fifth image is determined according to the third image and the blur radius and blur sign of each pixel point in the fifth image; and the fourth image is determined according to the first weight parameter and the second weight parameter. In this way, the vignetting and aperture erosion produced when a camera shoots can be simulated, so that real camera characteristics can be reproduced by the electronic device, improving the realism of the simulated camera shot.
In the embodiment of the present application, the processing unit 904 is specifically configured to: performing boundary filling on the first image; and performing first transformation processing on the first image after the boundary filling according to a first transformation algorithm to obtain a fifth image.
According to the above embodiment of the application, in the process of performing the first transformation processing on the first image to obtain the fifth image, boundary filling is performed on the first image, and the first transformation processing is then applied to the boundary-filled first image according to the first transformation algorithm to obtain the fifth image. In this way, the fifth image is obtained through boundary-replicate filling and the first transformation processing of the first image, which ensures the accuracy of the determination of the fifth image and further improves the realism of the camera-shooting effect subsequently simulated on the basis of the fifth image.
In the embodiment of the present application, the second parameter includes a blur degree parameter and a focus disparity parameter, and the processing unit 904 is specifically configured to: determine the blur radius of each pixel point in the fifth image according to the pixel value of each pixel point in the second image, the blur degree parameter, and the focus disparity parameter; and determine the blur sign of each pixel point in the fifth image according to the result of comparing the pixel value of each pixel point in the second image with the focus disparity parameter.
According to the above embodiment of the application, the second parameter includes the blur degree parameter and the focus disparity parameter. In the process of determining the blur radius and blur sign of each pixel point in the fifth image according to the second image and the second parameter, the blur radius of each pixel point in the fifth image is determined according to the pixel value of each pixel point in the second image, the blur degree parameter, and the focus disparity parameter; and the blur sign of each pixel point in the fifth image is determined according to the result of comparing the pixel value of each pixel point in the second image with the focus disparity parameter. This ensures the accuracy of the determination of the blur radius and blur sign, and further ensures the accuracy of the subsequent determination of the fourth image.
In the embodiment of the present application, the processing unit 904 is specifically configured to: determine the neighborhood range of each pixel point in the fifth image according to the blur radius of each pixel point in the fifth image; and determine the first weight parameter of the neighborhood pixel points of each pixel point in the fifth image according to the coordinate value of each pixel point in the fifth image, the blur radius, and the coordinate values of the neighborhood pixel points.
According to the embodiment of the application, in the process of determining the first weight parameter of the fifth image according to the blur radius of each pixel point in the fifth image, the neighborhood range of each pixel point in the fifth image is determined according to the blur radius of each pixel point in the fifth image; and determining a first weight parameter of the neighborhood pixel point of each pixel point in the fifth image according to the coordinate value of each pixel point, the blur radius and the coordinate value of the neighborhood pixel point in the fifth image. Thus, the accuracy of the first weight parameter determination is ensured, and the accuracy of the subsequent fourth image determination based on the first weight parameter is further ensured.
In the embodiment of the present application, the processing unit 904 is specifically configured to: determine the neighborhood range of each pixel point in the fifth image according to the blur radius of each pixel point in the fifth image; and determine the second weight parameter of the neighborhood pixel points of each pixel point in the fifth image according to the coordinate value of each pixel point in the fifth image, the blur radius, the blur sign, the coordinate values of the neighborhood pixel points, and the state coordinate values of the third image.
In the above embodiment of the present application, in the process of determining the second weight parameter of the fifth image according to the blur radius and the blur sign of each pixel point in the third image and the fifth image, the neighborhood range of each pixel point in the fifth image is determined according to the blur radius of each pixel point in the fifth image; and determining a second weight parameter of the neighborhood pixel point of each pixel point in the fifth image according to the coordinate value of each pixel point in the fifth image, the blur radius, the blur symbol, the coordinate value of the neighborhood pixel point and the state coordinate value of the third image. Thus, the accuracy of the second weight parameter determination is ensured, and the accuracy of the subsequent fourth image determination based on the second weight parameter is further ensured.
In the embodiment of the present application, the processing unit 904 is specifically configured to: accumulate the first weight parameters of the neighborhood pixel points of each pixel point in the fifth image, or accumulate the products of the first weight parameters and the second weight parameters of the neighborhood pixel points of each pixel point in the fifth image, to obtain a first weight matrix; accumulate the products of the pixel value of each pixel point in the fifth image, the first weight parameter of the neighborhood pixel point and the second weight parameter, to obtain a second weight matrix; divide the second weight matrix by the first weight matrix point by point to obtain a first rendered image; perform second transformation processing on the first rendered image according to a second transformation algorithm to obtain a second rendered image, wherein the second transformation processing is the inverse transformation of the first transformation processing; and crop the boundary of the second rendered image to obtain a fourth image.
In the above embodiment of the present application, in the process of determining the fourth image according to the first weight parameter and the second weight parameter, the first weight parameters of the neighborhood pixel points of each pixel point in the fifth image are accumulated, or the products of the first weight parameters and the second weight parameters of the neighborhood pixel points of each pixel point in the fifth image are accumulated, to obtain a first weight matrix; the products of the pixel value of each pixel point in the fifth image, the first weight parameter of the neighborhood pixel point and the second weight parameter are accumulated to obtain a second weight matrix; the second weight matrix is divided by the first weight matrix point by point to obtain a first rendered image; second transformation processing is performed on the first rendered image according to a second transformation algorithm to obtain a second rendered image, wherein the second transformation processing is the inverse transformation of the first transformation processing; and the boundary of the second rendered image is cropped to obtain the fourth image. In this way, the vignetting and aperture erosion phenomena produced when a camera shoots can be simulated, so that real camera characteristics can be reproduced by the electronic device, improving the realism of the simulated camera shooting.
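Reusing the helpers sketched above, a minimal gather-style rendering loop could read as follows; inverse_transform stands in for the second transformation algorithm and pad for the earlier boundary filling width, both assumptions of this sketch rather than elements fixed by the embodiment:

```python
# A gather-style sketch of the accumulation, point-by-point division,
# inverse transformation and boundary cropping described above. Assumes
# the neighborhood_range / first_weight / second_weight helpers from the
# earlier sketches and a float (H, W, C) fifth image.
import numpy as np

def render(fifth, radius, sign, state, pad, inverse_transform):
    h, w, _ = fifth.shape
    acc = np.zeros_like(fifth)               # sums value * w1 * w2 (second weight matrix)
    norm = np.zeros((h, w, 1), fifth.dtype)  # sums w1 * w2 (first weight matrix)
    for cy in range(h):
        for cx in range(w):
            x0, x1, y0, y1 = neighborhood_range(cx, cy, radius[cy, cx], w, h)
            for ny in range(y0, y1):
                for nx in range(x0, x1):
                    w1 = first_weight(cx, cy, nx, ny, radius[cy, cx])
                    w2 = second_weight(cx, cy, nx, ny, radius[cy, cx], sign[cy, cx],
                                       state[cy, cx, 0], state[cy, cx, 1])
                    acc[cy, cx] += w1 * w2 * fifth[ny, nx]
                    norm[cy, cx] += w1 * w2
    first_rendered = acc / np.maximum(norm, 1e-6)         # point-by-point division
    second_rendered = inverse_transform(first_rendered)   # inverse of the first transform
    return second_rendered[pad:-pad, pad:-pad]            # crop the filled boundary
```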
The image processing apparatus 900 in the embodiment of the present application may be an electronic device, or may be a component in an electronic device, such as an integrated circuit or a chip. The electronic device may be a terminal, or may be a device other than a terminal. By way of example, the electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a mobile internet device (MID), an augmented reality (AR)/virtual reality (VR) device, a robot, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a personal digital assistant (PDA), and may also be a server, a network attached storage (NAS), a personal computer (PC), a television (TV), a teller machine or a self-service machine, etc.; the embodiments of the present application are not specifically limited in this respect.
The image processing apparatus 900 in the embodiment of the present application may be an apparatus having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiments of the present application.
The image processing apparatus 900 provided in the embodiment of the present application can implement each process implemented by the method embodiments of fig. 1 to fig. 7; to avoid repetition, details are not repeated here.
Optionally, as shown in fig. 9, an embodiment of the present application further provides an electronic device 1000, including a processor 1002 and a memory 1004, where the memory 1004 stores a program or an instruction executable on the processor 1002. When executed by the processor 1002, the program or instruction implements each step of the above image processing method embodiment and achieves the same technical effects; to avoid repetition, details are not repeated here.
The electronic device in the embodiment of the present application includes the mobile electronic devices and the non-mobile electronic devices described above.
Fig. 10 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 1100 includes, but is not limited to: radio frequency unit 1101, network module 1102, audio output unit 1103, input unit 1104, sensor 1105, display unit 1106, user input unit 1107, interface unit 1108, memory 1109, and processor 1110.
Those skilled in the art will appreciate that the electronic device 1100 may further include a power source (e.g., a battery) for supplying power to the various components, and the power source may be logically connected to the processor 1110 through a power management system, so that functions such as charging management, discharging management, and power consumption management are performed through the power management system. The electronic device structure shown in fig. 10 does not constitute a limitation of the electronic device; the electronic device may include more or fewer components than shown, combine certain components, or have a different arrangement of components, which will not be described in detail here.
The processor 1110 is configured to obtain a first image, a second image, and a first parameter of a camera of the electronic device, where the first image is acquired by the camera, and the second image is a disparity map of the first image.
The processor 1110 is further configured to determine a two-dimensional coordinate graph according to the size parameter of the first image.
The processor 1110 is further configured to determine a third image according to the two-dimensional coordinate graph and the first parameter, where the third image is a state coordinate graph of the first image.
The processor 1110 is further configured to perform rendering processing on the first image according to the second image, the third image, and the second parameter of the camera, so as to obtain a fourth image.
In the embodiment of the present application, a first image, a second image and a first parameter of a camera of the electronic device are acquired, wherein the first image is acquired by the camera, and the second image is a parallax image of the first image; a two-dimensional coordinate graph is determined according to the size parameter of the first image; a third image is determined according to the two-dimensional coordinate graph and the first parameter, wherein the third image is a state coordinate graph of the first image; and rendering processing is performed on the first image according to the second image, the third image and the second parameter of the camera to obtain a fourth image. That is, the state coordinate graph of the first image is obtained by combining the first parameter of the camera and the first image, and rendering processing is then performed on the first image by combining the state coordinate graph of the first image, the second parameter of the camera and the parallax graph of the first image to obtain the fourth image. In this way, the vignetting and aperture erosion phenomena produced when a camera shoots can be simulated, so that real camera characteristics can be reproduced by the electronic device, improving the realism with which the camera of the electronic device simulates a real camera.
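As a small illustration of the first half of this pipeline, the two-dimensional coordinate graph and a possible state coordinate graph can be sketched as below; normalizing the coordinates about an optical center taken from the first parameter is an assumption of the sketch, since the embodiment leaves the exact mapping open:

```python
# A hedged sketch of the two-dimensional coordinate graph and the state
# coordinate graph (third image). The normalization about an assumed
# optical center is illustrative only.
import numpy as np

def state_coordinate_map(height, width, optical_center):
    # Two-dimensional coordinate graph from the size parameter of the first image
    ys, xs = np.mgrid[0:height, 0:width].astype(np.float32)
    cx, cy = optical_center  # assumed content of the first parameter
    # Per-pixel state coordinates that grow toward the frame corners,
    # where vignetting and aperture erosion are strongest.
    return np.stack([(xs - cx) / width, (ys - cy) / height], axis=-1)
```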
Optionally, the processor 1110 is specifically configured to: perform first transformation processing on the first image to obtain a fifth image; determine a blur radius and a blur sign of each pixel point in the fifth image according to the second image and the second parameter; determine a first weight parameter of the fifth image according to the blur radius of each pixel point in the fifth image; determine a second weight parameter of the fifth image according to the third image and the blur radius and the blur sign of each pixel point in the fifth image; and determine a fourth image according to the first weight parameter and the second weight parameter.
In the above embodiment of the present application, in the process of performing rendering processing on the first image according to the second image, the third image and the second parameter of the camera to obtain the fourth image, first transformation processing is performed on the first image to obtain a fifth image; a blur radius and a blur sign of each pixel point in the fifth image are determined according to the second image and the second parameter; a first weight parameter of the fifth image is determined according to the blur radius of each pixel point in the fifth image; a second weight parameter of the fifth image is determined according to the third image and the blur radius and the blur sign of each pixel point in the fifth image; and the fourth image is determined according to the first weight parameter and the second weight parameter. In this way, the vignetting and aperture erosion phenomena produced when a camera shoots can be simulated, so that real camera characteristics can be reproduced by the electronic device, improving the realism of the simulated camera shooting.
Optionally, the processor 1110 is specifically configured to: perform boundary filling on the first image; and perform first transformation processing on the first image after the boundary filling according to a first transformation algorithm to obtain a fifth image.
In the above embodiment of the present application, in the process of performing first transformation processing on the first image to obtain the fifth image, boundary filling is performed on the first image; and first transformation processing is performed on the boundary-filled first image according to a first transformation algorithm to obtain the fifth image. In this way, the fifth image is obtained by performing boundary copy filling and first transformation processing on the first image, which ensures the accuracy of the determination of the fifth image and further improves the realism of the camera shooting effect subsequently simulated on the basis of the fifth image.
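A minimal sketch of this step is shown below, assuming replicate ("edge") padding for the boundary copy filling and a gamma-style conversion to approximately linear light as the first transformation; the embodiment does not prescribe a particular first transformation algorithm:

```python
# Boundary filling plus an assumed first transformation. Working in
# (approximately) linear light makes the later weighted accumulation
# blend intensities more like a physical lens would.
import numpy as np

def pad_and_transform(first, pad=16, gamma=2.2):
    # Boundary filling by copying the edge pixels outward; assumes an
    # 8-bit (H, W, 3) input image.
    padded = np.pad(first, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    # First transformation: 8-bit sRGB-like values -> linear light
    return np.power(padded.astype(np.float32) / 255.0, gamma)
```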
Optionally, the second parameter includes a blur degree parameter and a focus parallax parameter, and the processor 1110 is specifically configured to: determine the blur radius of each pixel point in the fifth image according to the pixel value of each pixel point in the second image, the blur degree parameter and the focus parallax parameter; and determine the blur sign of each pixel point in the fifth image according to the comparison result of the pixel value of each pixel point in the second image and the focus parallax parameter.
In the above embodiment of the present application, the second parameter comprises a blur degree parameter and a focus parallax parameter. In the process of determining the blur radius and the blur sign of each pixel point in the fifth image according to the second image and the second parameter, the blur radius of each pixel point in the fifth image is determined according to the pixel value of each pixel point in the second image, the blur degree parameter and the focus parallax parameter; and the blur sign of each pixel point in the fifth image is determined according to the comparison result of the pixel value of each pixel point in the second image and the focus parallax parameter. Thus, the accuracy of determining the blur radius and the blur sign is ensured, and the accuracy of determining the subsequent fourth image is further ensured.
Optionally, the processor 1110 is specifically configured to: determine a neighborhood range of each pixel point in the fifth image according to the blur radius of each pixel point in the fifth image; and determine a first weight parameter of the neighborhood pixel point of each pixel point in the fifth image according to the coordinate value of each pixel point in the fifth image, the blur radius, and the coordinate value of the neighborhood pixel point.
In the above embodiment of the present application, in the process of determining the first weight parameter of the fifth image according to the blur radius of each pixel point in the fifth image, the neighborhood range of each pixel point in the fifth image is determined according to the blur radius of each pixel point in the fifth image; and the first weight parameter of the neighborhood pixel point of each pixel point in the fifth image is determined according to the coordinate value of each pixel point in the fifth image, the blur radius, and the coordinate value of the neighborhood pixel point. Thus, the accuracy of the first weight parameter determination is ensured, and the accuracy of the subsequent fourth image determination based on the first weight parameter is further ensured.
Optionally, the processor 1110 is specifically configured to: determine a neighborhood range of each pixel point in the fifth image according to the blur radius of each pixel point in the fifth image; and determine a second weight parameter of the neighborhood pixel point of each pixel point in the fifth image according to the coordinate value of each pixel point in the fifth image, the blur radius, the blur sign, the coordinate value of the neighborhood pixel point, and the state coordinate value of the third image.
In the above embodiment of the present application, in the process of determining the second weight parameter of the fifth image according to the third image and the blur radius and the blur sign of each pixel point in the fifth image, the neighborhood range of each pixel point in the fifth image is determined according to the blur radius of each pixel point in the fifth image; and the second weight parameter of the neighborhood pixel point of each pixel point in the fifth image is determined according to the coordinate value of each pixel point in the fifth image, the blur radius, the blur sign, the coordinate value of the neighborhood pixel point, and the state coordinate value of the third image. Thus, the accuracy of the second weight parameter determination is ensured, and the accuracy of the subsequent fourth image determination based on the second weight parameter is further ensured.
Optionally, the processor 1110 is specifically configured to: accumulate the first weight parameters of the neighborhood pixel points of each pixel point in the fifth image, or accumulate the products of the first weight parameters and the second weight parameters of the neighborhood pixel points of each pixel point in the fifth image, to obtain a first weight matrix; accumulate the products of the pixel value of each pixel point in the fifth image, the first weight parameter of the neighborhood pixel point and the second weight parameter, to obtain a second weight matrix; divide the second weight matrix by the first weight matrix point by point to obtain a first rendered image; perform second transformation processing on the first rendered image according to a second transformation algorithm to obtain a second rendered image, wherein the second transformation processing is the inverse transformation of the first transformation processing; and crop the boundary of the second rendered image to obtain a fourth image.
In the above embodiment of the present application, in the process of determining the fourth image according to the first weight parameter and the second weight parameter, the first weight parameters of the neighborhood pixel points of each pixel point in the fifth image are accumulated, or the products of the first weight parameters and the second weight parameters of the neighborhood pixel points of each pixel point in the fifth image are accumulated, to obtain a first weight matrix; the products of the pixel value of each pixel point in the fifth image, the first weight parameter of the neighborhood pixel point and the second weight parameter are accumulated to obtain a second weight matrix; the second weight matrix is divided by the first weight matrix point by point to obtain a first rendered image; second transformation processing is performed on the first rendered image according to a second transformation algorithm to obtain a second rendered image, wherein the second transformation processing is the inverse transformation of the first transformation processing; and the boundary of the second rendered image is cropped to obtain the fourth image. In this way, the vignetting and aperture erosion phenomena produced when a camera shoots can be simulated, so that real camera characteristics can be reproduced by the electronic device, improving the realism of the simulated camera shooting.
It should be appreciated that, in embodiments of the present application, the input unit 1104 may include a graphics processing unit (GPU) 11041 and a microphone 11042, and the graphics processor 11041 processes image data of still pictures or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The display unit 1106 may include a display panel 11061, and the display panel 11061 may be configured in the form of a liquid crystal display, an organic light-emitting diode, or the like. The user input unit 1107 includes at least one of a touch panel 11071 and other input devices 11072. The touch panel 11071, also referred to as a touch screen, may include two parts: a touch detection device and a touch controller. Other input devices 11072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse and a joystick, which will not be described in detail here.
The memory 1109 may be used to store software programs as well as various data. The memory 1109 may mainly include a first storage area storing programs or instructions and a second storage area storing data, wherein the first storage area may store an operating system, and application programs or instructions required for at least one function (such as a sound playing function and an image playing function), and the like. Further, the memory 1109 may include volatile memory or non-volatile memory, or may include both volatile and non-volatile memory. The non-volatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), or a flash memory. The volatile memory may be a random access memory (RAM), a static RAM (SRAM), a dynamic RAM (DRAM), a synchronous DRAM (SDRAM), a double data rate SDRAM (DDR SDRAM), an enhanced SDRAM (ESDRAM), a synchlink DRAM (SLDRAM), or a direct Rambus RAM (DRRAM). The memory 1109 in embodiments of the present application includes, but is not limited to, these and any other suitable types of memory.
The processor 1110 may include one or more processing units. Optionally, the processor 1110 integrates an application processor and a modem processor, where the application processor primarily handles operations involving the operating system, user interface, application programs, and the like, and the modem processor, such as a baseband processor, primarily handles wireless communication signals. It will be appreciated that the modem processor may alternatively not be integrated into the processor 1110.
An embodiment of the present application also provides a readable storage medium storing a program or an instruction which, when executed by a processor, implements each process of the above image processing method embodiment and achieves the same technical effects; to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device of the above embodiment. The readable storage medium includes a computer readable storage medium, such as a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
An embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to run programs or instructions to implement each process of the above image processing method embodiment with the same technical effects; to avoid repetition, details are not repeated here.
It should be understood that the chip referred to in the embodiments of the present application may also be called a system-level chip, a chip system, or a system-on-a-chip.
Embodiments of the present application provide a computer program product stored in a storage medium, where the program product is executed by at least one processor to implement the respective processes of the above image processing method embodiment and achieve the same technical effects; to avoid repetition, details are not repeated here.
It should be noted that, in this document, the terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed, or elements inherent to such a process, method, article or apparatus. Without further limitation, an element preceded by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article or apparatus that comprises that element. Furthermore, it should be noted that the scope of the methods and apparatuses in the embodiments of the present application is not limited to performing the functions in the order shown or discussed, and may also include performing the functions in a substantially simultaneous manner or in a reverse order depending on the functions involved; for example, the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. Additionally, features described with reference to certain examples may be combined in other examples.
From the above description of the embodiments, it will be clear to those skilled in the art that the methods of the above embodiments may be implemented by means of software plus a necessary general-purpose hardware platform, and may of course also be implemented by hardware; in many cases, however, the former is the preferred implementation. Based on such understanding, the technical solution of the present application may be embodied, in essence or in the part contributing to the prior art, in the form of a computer software product stored on a storage medium (e.g., ROM/RAM, magnetic disk, optical disk), including instructions for causing a terminal (which may be a mobile phone, a computer, a server, a network device, or the like) to perform the methods of the embodiments of the present application.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the above specific embodiments, which are merely illustrative and not restrictive. Enlightened by the present application, those of ordinary skill in the art may make many further variations without departing from the spirit of the present application and the scope of the claims, all of which fall within the protection of the present application.

Claims (10)

1. An image processing method, the method comprising:
acquiring a first image, a second image and a first parameter of a camera of the electronic equipment, wherein the first image is acquired by the camera, and the second image is a parallax image of the first image;
determining a two-dimensional coordinate graph according to the size parameter of the first image;
determining a third image according to the two-dimensional coordinate graph and the first parameter, wherein the third image is a state coordinate graph of the first image;
and rendering the first image according to the second image, the third image and the second parameter of the camera to obtain a fourth image.
2. The image processing method according to claim 1, wherein the rendering the first image according to the second image, the third image, and the second parameter of the camera to obtain a fourth image includes:
performing first transformation processing on the first image to obtain a fifth image;
determining a blur radius and a blur sign of each pixel point in the fifth image according to the second image and the second parameter;
determining a first weight parameter of the fifth image according to the blur radius of each pixel point in the fifth image;
determining a second weight parameter of the fifth image according to the third image and the blur radius and the blur sign of each pixel point in the fifth image;
and determining the fourth image according to the first weight parameter and the second weight parameter.
3. The image processing method according to claim 2, wherein the second parameter includes a blur degree parameter and a focus parallax parameter, and the determining a blur radius and a blur sign of each pixel point in the fifth image according to the second image and the second parameter includes:
determining the blur radius of each pixel point in the fifth image according to the pixel value of each pixel point in the second image, the blur degree parameter and the focus parallax parameter;
determining the blur sign of each pixel point in the fifth image according to the comparison result of the pixel value of each pixel point in the second image and the focus parallax parameter;
the determining a first weight parameter of the fifth image according to the blur radius of each pixel point in the fifth image includes:
determining a neighborhood range of each pixel point in the fifth image according to the blur radius of each pixel point in the fifth image;
and determining a first weight parameter of the neighborhood pixel point of each pixel point in the fifth image according to the coordinate value of each pixel point in the fifth image, the blur radius, and the coordinate value of the neighborhood pixel point.
4. The image processing method according to claim 2, wherein the determining a second weight parameter of the fifth image according to the third image and the blur radius and the blur sign of each pixel point in the fifth image includes:
determining a neighborhood range of each pixel point in the fifth image according to the blur radius of each pixel point in the fifth image;
and determining a second weight parameter of the neighborhood pixel point of each pixel point in the fifth image according to the coordinate value of each pixel point in the fifth image, the blur radius, the blur sign, the coordinate value of the neighborhood pixel point, and the state coordinate value of the third image.
5. The image processing method according to any one of claims 2 to 4, wherein the determining the fourth image according to the first weight parameter and the second weight parameter includes:
accumulating the first weight parameters of the neighborhood pixel points of each pixel point in the fifth image, or accumulating the products of the first weight parameters and the second weight parameters of the neighborhood pixel points of each pixel point in the fifth image, to obtain a first weight matrix;
accumulating the products of the pixel value of each pixel point in the fifth image, the first weight parameter of the neighborhood pixel point and the second weight parameter to obtain a second weight matrix;
dividing the second weight matrix by the first weight matrix point by point to obtain a first rendered image;
performing second transformation processing on the first rendered image according to a second transformation algorithm to obtain a second rendered image, wherein the second transformation processing is the inverse transformation of the first transformation processing;
and cropping the boundary of the second rendered image to obtain the fourth image.
6. An image processing apparatus, characterized in that the apparatus comprises:
an acquisition unit, configured to acquire a first image, a second image and a first parameter of a camera of an electronic device, wherein the first image is acquired by the camera, and the second image is a parallax image of the first image;
a processing unit, configured to determine a two-dimensional coordinate graph according to the size parameter of the first image;
the processing unit is further configured to determine a third image according to the two-dimensional coordinate graph and the first parameter, wherein the third image is a state coordinate graph of the first image;
and the processing unit is further configured to perform rendering processing on the first image according to the second image, the third image and the second parameter of the camera to obtain a fourth image.
7. The image processing device according to claim 6, wherein the processing unit is specifically configured to:
perform first transformation processing on the first image to obtain a fifth image;
determine a blur radius and a blur sign of each pixel point in the fifth image according to the second image and the second parameter;
determine a first weight parameter of the fifth image according to the blur radius of each pixel point in the fifth image;
determine a second weight parameter of the fifth image according to the third image and the blur radius and the blur sign of each pixel point in the fifth image;
and determine the fourth image according to the first weight parameter and the second weight parameter.
8. The image processing apparatus according to claim 7, wherein the second parameter includes a blur degree parameter and a focus parallax parameter, and the processing unit is specifically configured to:
determine the blur radius of each pixel point in the fifth image according to the pixel value of each pixel point in the second image, the blur degree parameter and the focus parallax parameter;
determine the blur sign of each pixel point in the fifth image according to the comparison result of the pixel value of each pixel point in the second image and the focus parallax parameter;
determine a neighborhood range of each pixel point in the fifth image according to the blur radius of each pixel point in the fifth image;
and determine a first weight parameter of the neighborhood pixel point of each pixel point in the fifth image according to the coordinate value of each pixel point in the fifth image, the blur radius, and the coordinate value of the neighborhood pixel point.
9. The image processing device according to claim 7, wherein the processing unit is specifically configured to:
determine a neighborhood range of each pixel point in the fifth image according to the blur radius of each pixel point in the fifth image;
and determine a second weight parameter of the neighborhood pixel point of each pixel point in the fifth image according to the coordinate value of each pixel point in the fifth image, the blur radius, the blur sign, the coordinate value of the neighborhood pixel point, and the state coordinate value of the third image.
10. The image processing apparatus according to any one of claims 7 to 9, wherein the processing unit is specifically configured to:
accumulate the first weight parameters of the neighborhood pixel points of each pixel point in the fifth image, or accumulate the products of the first weight parameters and the second weight parameters of the neighborhood pixel points of each pixel point in the fifth image, to obtain a first weight matrix;
accumulate the products of the pixel value of each pixel point in the fifth image, the first weight parameter of the neighborhood pixel point and the second weight parameter to obtain a second weight matrix;
divide the second weight matrix by the first weight matrix point by point to obtain a first rendered image;
perform second transformation processing on the first rendered image according to a second transformation algorithm to obtain a second rendered image, wherein the second transformation processing is the inverse transformation of the first transformation processing;
and crop the boundary of the second rendered image to obtain the fourth image.
CN202311090137.6A 2023-08-28 2023-08-28 Image processing method and device Pending CN117135445A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311090137.6A CN117135445A (en) 2023-08-28 2023-08-28 Image processing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311090137.6A CN117135445A (en) 2023-08-28 2023-08-28 Image processing method and device

Publications (1)

Publication Number Publication Date
CN117135445A true CN117135445A (en) 2023-11-28

Family

ID=88850417

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311090137.6A Pending CN117135445A (en) 2023-08-28 2023-08-28 Image processing method and device

Country Status (1)

Country Link
CN (1) CN117135445A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN119131016A (en) * 2024-11-11 2024-12-13 渭南大东印刷包装机械有限公司 Online detection method for damage of printing scraper


Similar Documents

Publication Publication Date Title
US11170210B2 (en) Gesture identification, control, and neural network training methods and apparatuses, and electronic devices
US10970821B2 (en) Image blurring methods and apparatuses, storage media, and electronic devices
CN108230333B (en) Image processing method, image processing apparatus, computer program, storage medium, and electronic device
CN114445315A (en) Image quality enhancement method and electronic device
CN114390201A (en) Focusing method and device thereof
CN111951192A (en) Shot image processing method and shooting equipment
CN117135445A (en) Image processing method and device
CN114493988A (en) Image blurring method, image blurring device and terminal equipment
CN108229281B (en) Neural network generation method, face detection device and electronic equipment
CN114037740B (en) Image data stream processing method and device and electronic equipment
CN114881841A (en) Image generation method and device
CN112702528B (en) Video anti-shake method and device and electronic equipment
CN114241127A (en) Panoramic image generation method and device, electronic equipment and medium
CN113592922A (en) Image registration processing method and device
CN113628259A (en) Image registration processing method and device
CN115439386A (en) Image fusion method and device, electronic equipment and storage medium
WO2022271499A1 (en) Methods and systems for depth estimation using fisheye cameras
CN117201941A (en) Image processing method, device, electronic equipment and readable storage medium
CN115589456B (en) Shooting method and device
CN113850739B (en) Image processing method and device
CN117156285A (en) Image processing method and device
CN117750195A (en) Image processing method, device, readable storage medium and electronic equipment
CN117793513A (en) Video processing method and device
CN117541507A (en) Image data pair establishing method and device, electronic equipment and readable storage medium
CN116797886A (en) Model training method, image processing device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination