
CN112308796A - Image processing method and device and processor - Google Patents

Image processing method and device and processor

Info

Publication number
CN112308796A
CN112308796A (application CN202011182890.4A)
Authority
CN
China
Prior art keywords
image
gray scale
frame image
determining
curve
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011182890.4A
Other languages
Chinese (zh)
Inventor
刘诣荣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xian Wanxiang Electronics Technology Co Ltd
Original Assignee
Xian Wanxiang Electronics Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xian Wanxiang Electronics Technology Co Ltd filed Critical Xian Wanxiang Electronics Technology Co Ltd
Priority to CN202011182890.4A
Publication of CN112308796A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/70 Denoising; Smoothing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/136 Segmentation; Edge detection involving thresholding
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/90 Determination of colour characteristics
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N 19/169 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N 19/182 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being a pixel

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The invention discloses an image processing method and device and a processor. The method comprises the following steps: determining a first gray level image of a current frame image to be coded and a second gray level image of a reference frame image; calculating the Euclidean distance between corresponding pixels of the first gray level image and the second gray level image; determining a noise area of the first gray level image of the current frame image according to the Euclidean distance; and replacing the noise area of the current frame image with a replacement area of the reference frame image to obtain an updated current frame image to be coded, wherein the replacement area is the area in the reference frame image corresponding to the noise area. The invention solves the technical problem in the related art that the coding code stream is increased because noise doped into the image to be coded produces redundant coding.

Description

Image processing method and device and processor
Technical Field
The invention relates to the field of image processing, in particular to an image processing method and device and a processor.
Background
In video encoding, decoding and transmission, an image to be encoded is doped with transmission noise, quantization noise and the like during image acquisition. The doped noise is mostly random, and its position and value are largely unpredictable, so in inter-frame video coding an unchanged macro block between the current frame and the reference frame is turned into a changed macro block, redundant coding is produced, and the code stream of the frame is increased. Denoising the current image frame to be transmitted is therefore an effective means of reducing the code stream.
In view of the above problems, no effective solution has been proposed.
Disclosure of Invention
The embodiment of the invention provides an image processing method, an image processing device and a processor, so as to at least solve the technical problem in the related art that the coding code stream is increased because noise doped into the image to be coded produces redundant coding.
According to an aspect of an embodiment of the present invention, there is provided an image processing method including: determining a first gray level image of a current frame image to be coded and a second gray level image of a reference frame image; calculating to obtain Euclidean distance between pixels of the first gray level image and the second gray level image; determining a noise area of a first gray level image of the current frame image according to the Euclidean distance; and replacing the noise area of the current frame image with the replacement area of the reference frame image to obtain an updated current frame image to be coded, wherein the replacement area is an area corresponding to the noise area in the reference frame image.
Optionally, determining the first grayscale image of the current frame image to be encoded and the second grayscale image of the reference frame image includes: determining the difference value of the gray levels of pixel points of the current frame image and the reference frame image in Y/U/V three channels respectively, wherein the current frame image and the reference frame image are YUV images; determining the maximum difference and the minimum difference of the differences of the Y/U/V three channels; under the condition that the difference between the maximum difference value and the minimum difference value meets a preset condition, taking the gray value of the channel corresponding to the maximum difference value as the gray value of the pixel coordinates of the current frame image and the reference frame image; and under the condition that the difference between the maximum difference value and the minimum difference value does not meet a preset condition, taking the gray value of the channel corresponding to the minimum difference value as the gray value of the pixel coordinates of the current frame image and the reference frame image to obtain the first gray image and the second gray image.
Optionally, determining the noise region of the first gray scale image of the current frame image according to the euclidean distance includes: under the condition that the Euclidean distance is smaller than or equal to a first preset distance, determining a pixel corresponding to the Euclidean distance as a noise region; determining that a pixel corresponding to the Euclidean distance is a suspected noise area when the Euclidean distance is greater than a first preset distance and less than or equal to a second preset distance; and under the condition that the Euclidean distance is greater than a second preset distance, determining a pixel corresponding to the Euclidean distance as an effective region.
Optionally, after determining that the pixel corresponding to the euclidean distance is the suspected noise area when the euclidean distance is greater than the first preset distance and is less than or equal to the second preset distance, the method further includes: according to the reference area of the reference frame image corresponding to the suspected noise area; determining a first gray scale change mean curve of the suspected noise area and a second gray scale change mean curve of the reference area; determining the change similarity of the first gray scale change mean value curve and the second gray scale change mean value curve; according to the change similarity, under the condition that the first gray scale change mean value curve is not similar to the second gray scale change mean value curve, determining the pixels of the suspected noise area as a noise area; and under the condition that the first gray scale change mean curve is similar to the second gray scale change mean curve, determining that the pixels of the suspected noise area are effective areas.
Optionally, determining the first gray-scale variation mean curve of the suspected noise area and the second gray-scale variation mean curve of the reference area includes: determining a first row direction gray scale mean curve and a first column direction gray scale mean curve of the suspected noise area, and a second row direction gray scale mean curve and a second column direction gray scale mean curve of the reference area, wherein the first gray scale change mean curve comprises the first row direction gray scale mean curve and the first column direction gray scale mean curve, and the second gray scale change mean curve comprises the second row direction gray scale mean curve and the second column direction gray scale mean curve; determining the change similarity of the first and second mean change curves comprises: determining a first change similarity according to the first row direction gray scale mean curve and the second row direction gray scale mean curve, and determining a second change similarity according to the first column direction gray scale mean curve and the second column direction gray scale mean curve, wherein the change similarity comprises the first change similarity and the second change similarity.
Optionally, determining that the pixel of the suspected noise area is a noise area when the first gray scale change mean curve is not similar to the second gray scale change mean curve according to the change similarity; determining that the pixels of the suspected noise area are effective areas when the first gray scale variation mean curve is similar to the second gray scale variation mean curve comprises: determining that the first gray scale change mean curve is not similar to the second gray scale change mean curve under the condition that the first change similarity does not reach a first similarity threshold or the second change similarity does not reach a second similarity threshold, wherein pixels of the suspected noise area are noise areas; and under the condition that the first change similarity reaches a first similarity threshold value and the second change similarity reaches a second similarity threshold value, determining that the first gray scale change mean curve is similar to the second gray scale change mean curve, and taking the pixels of the suspected noise area as an effective area.
Optionally, determining the change similarity between the first gray scale change mean curve and the second gray scale change mean curve includes: performing exponential transformation on the first gray scale variation mean value curve and the second gray scale variation mean value curve; and calculating the mean square error of the first gray scale change mean curve and the second gray scale change mean curve after the exponential transformation to obtain the change similarity.
According to another aspect of the embodiments of the present invention, there is also provided an image processing apparatus including: the first determining module is used for determining a first gray level image of a current frame image to be coded and a second gray level image of a reference frame image; the calculation module is used for calculating and obtaining the Euclidean distance between pixels of the first gray level image and the second gray level image; the second determining module is used for determining a noise area of the first gray level image of the current frame image according to the Euclidean distance; and the replacing module is used for replacing the noise area of the current frame image with the replacing area of the reference frame image to obtain the updated current frame image to be coded, wherein the replacing area is an area corresponding to the noise area in the reference frame image.
According to another aspect of the embodiments of the present invention, there is also provided a processor, configured to execute a program, where the program executes to perform the image processing method according to any one of the above.
According to another aspect of the embodiments of the present invention, there is also provided a computer storage medium, where the computer storage medium includes a stored program, and when the program runs, the apparatus on which the computer storage medium is located is controlled to execute any one of the image processing methods described above.
In the embodiment of the invention, a first gray image of a current frame image to be coded and a second gray image of a reference frame image are determined; the Euclidean distance between corresponding pixels of the first gray image and the second gray image is calculated; a noise area of the first gray image of the current frame image is determined according to the Euclidean distance; and the noise area of the current frame image is replaced with a replacement area of the reference frame image to obtain an updated current frame image to be encoded, the replacement area being the area of the reference frame image corresponding to the noise area. By determining the noise area of the current frame image from the Euclidean distance between pixel points of the first gray scale image of the current frame image and the second gray scale image of the reference frame image, and replacing that noise area with the corresponding replacement area of the reference frame image, the current frame image is denoised, coding redundancy caused by the noise area is avoided, and the coding of the current frame is effectively simplified. This achieves the technical effect of reducing the coding code stream of the current frame and solves the technical problem in the related art that the coding code stream is increased because noise doped into the image coding produces redundant coding.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
FIG. 1 is a flow chart of an image processing method according to an embodiment of the invention;
FIG. 2-1 is a schematic diagram of noise point locations according to an embodiment of the present invention;
FIG. 2-2 is a schematic diagram of a noise region where a noise point is located in a current frame image according to an embodiment of the present invention;
FIGS. 2-3 are schematic diagrams of alternative regions of a reference frame image corresponding to noisy regions in accordance with embodiments of the present invention;
FIG. 3 is a flow diagram of image processing according to an embodiment of the present invention;
fig. 4 is a schematic diagram of an image processing apparatus according to an embodiment of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
In accordance with an embodiment of the present invention, there is provided a method embodiment of an image processing method, it being noted that the steps illustrated in the flowchart of the figure may be performed in a computer system, such as a set of computer-executable instructions, and that while a logical order is illustrated in the flowchart, in some cases the steps illustrated or described may be performed in an order different than presented herein.
Fig. 1 is a flowchart of an image processing method according to an embodiment of the present invention, as shown in fig. 1, the method including the steps of:
step S102, determining a first gray image of a current frame image to be coded and a second gray image of a reference frame image;
step S104, calculating to obtain Euclidean distance between pixels of the first gray level image and pixels of the second gray level image;
step S106, determining a noise area of a first gray level image of the current frame image according to the Euclidean distance;
and step S108, replacing the noise area of the current frame image with the replacement area of the reference frame image to obtain the updated current frame image to be coded, wherein the replacement area is an area corresponding to the noise area in the reference frame image.
Through the above steps, a first gray image of a current frame image to be coded and a second gray image of a reference frame image are determined; the Euclidean distance between corresponding pixels of the first gray image and the second gray image is calculated; a noise area of the first gray image of the current frame image is determined according to the Euclidean distance; and the noise area of the current frame image is replaced with a replacement area of the reference frame image to obtain an updated current frame image to be encoded, the replacement area being the area of the reference frame image corresponding to the noise area. Determining the noise area of the current frame image from the Euclidean distance between pixel points of the first gray scale image of the current frame image and the second gray scale image of the reference frame image, and replacing that noise area with the corresponding replacement area of the reference frame image, denoises the current frame image, avoids the coding redundancy caused by the noise area, and effectively simplifies the coding of the current frame. This achieves the technical effect of reducing the coding code stream of the current frame and solves the technical problem in the related art that the coding code stream is increased because noise doped into the image coding produces redundant coding.
The current frame image may belong to an image stream, and the object to be encoded may be the image stream. The image stream includes a plurality of frame images arranged in a certain frame order, the frame images are encoded one by one in that order, and the current frame image is the frame image currently being encoded.
The reference frame image is the frame image selected as a reference when encoding the current frame image. The reference frame may be an original image of the image stream or an image obtained by processing such an original image, and it is usually the frame immediately preceding the current frame. Since images can be denoised in this embodiment, the reference frame image may be the denoised version of the previous frame image of the current frame image.
When the image stream is coded, if the current frame image is the first frame image, the first frame image itself is used as the reference frame; since the reference frame is then the current frame image itself, no contrast is available for denoising with this method, and the first frame image is encoded directly. The current frame image in this embodiment is therefore a non-first frame image of the image stream.
In step S102, a first gray image of the current frame image to be coded and a second gray image of the reference frame image are determined. The first gray image and the second gray image may be maximum inter-class difference gray images: the gray difference values of the pixel points of the current frame image and the reference frame image are determined in each of the Y/U/V channels, and the maximum difference value and the minimum difference value among these differences are determined; when the difference between the maximum difference value and the minimum difference value meets a preset condition, the gray value of the channel corresponding to the maximum difference value is taken as the gray value at that pixel coordinate of the current frame image and the reference frame image; when the difference between the maximum difference value and the minimum difference value does not meet the preset condition, the gray value of the channel corresponding to the minimum difference value is taken as the gray value at that pixel coordinate, so that the gray images are obtained.
Optionally, determining a first grayscale image of the current frame image to be encoded, and determining a second grayscale image of the reference frame image includes: determining the difference value of pixel point gray levels of a current frame image and a reference frame image in Y/U/V three channels respectively, wherein the current frame image and the reference frame image are YUV images; determining the maximum difference and the minimum difference of the differences of the Y/U/V three channels; under the condition that the difference between the maximum difference value and the minimum difference value meets a preset condition, taking the gray value of the channel corresponding to the maximum difference value as the gray value of the pixel coordinates of the current frame image and the reference frame image; and under the condition that the difference between the maximum difference value and the minimum difference value does not meet the preset condition, taking the gray value of the channel corresponding to the minimum difference value as the gray value of the pixel coordinates of the current frame image and the reference frame image to obtain a first gray image and a second gray image.
Specifically, cur represents a current frame image-related parameter, and ref represents a reference frame image-related parameter, as follows.
diff_y(x,y) = |cur_y(x,y) - ref_y(x,y)|
diff_u(x,y) = |cur_u(x,y) - ref_u(x,y)|
diff_v(x,y) = |cur_v(x,y) - ref_v(x,y)|
In the formulas, diff_y(x,y), diff_u(x,y) and diff_v(x,y) are the differences of the Y, U and V channels respectively; cur_y(x,y), cur_u(x,y) and cur_v(x,y) are the gray values in the Y, U and V channels of the pixel point with coordinate (x,y) in the current frame image; and ref_y(x,y), ref_u(x,y) and ref_v(x,y) are the gray values in the Y, U and V channels of the pixel point with coordinate (x,y) in the reference frame image.
maxDiff(x,y) = max{diff_y(x,y), diff_u(x,y), diff_v(x,y)}
minDiff(x,y) = min{diff_y(x,y), diff_u(x,y), diff_v(x,y)}
maxDiff(x,y) is the maximum of the differences of the Y/U/V three channels, and minDiff(x,y) is the minimum of the differences of the Y/U/V three channels.
Then:
curGrayImage(x,y) = cur_c(x,y), where c is the channel giving maxDiff(x,y) if maxDiff(x,y) - minDiff(x,y) > 2, and the channel giving minDiff(x,y) otherwise;
refGrayImage(x,y) = ref_c(x,y), with the channel c chosen in the same way.
curGrayImage(x,y) is the first gray image of the current frame image, i.e. its maximum inter-class difference gray image, and refGrayImage(x,y) is the second gray image of the reference frame image, i.e. its maximum inter-class difference gray image.
maxDiff(x,y) - minDiff(x,y) > 2 is the preset condition; the preset condition is not satisfied when maxDiff(x,y) - minDiff(x,y) <= 2.
The method can highlight the change area of the current frame image and the reference frame image so as to determine whether the change area is a normal effective change area of the image or an image change area caused by noise in the following process.
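For illustration only, the following minimal NumPy sketch shows one way to compute the maximum inter-class difference gray images described above; the function name, argument names and the use of NumPy are assumptions of this sketch rather than part of the original method, and the fixed threshold of 2 follows the preset condition stated above.

```python
import numpy as np

def max_interclass_gray(cur_yuv: np.ndarray, ref_yuv: np.ndarray, th: int = 2):
    """Maximum inter-class difference gray images of a current frame and a
    reference frame, both given as (H, W, 3) YUV arrays of equal size."""
    cur = cur_yuv.astype(np.int32)
    ref = ref_yuv.astype(np.int32)
    diff = np.abs(cur - ref)                 # per-pixel diff_y, diff_u, diff_v on axis 2
    max_diff = diff.max(axis=2)
    min_diff = diff.min(axis=2)
    # per pixel: channel with the largest difference if the preset condition holds,
    # otherwise the channel with the smallest difference
    chan = np.where(max_diff - min_diff > th, diff.argmax(axis=2), diff.argmin(axis=2))
    rows, cols = np.indices(chan.shape)
    cur_gray = cur[rows, cols, chan]         # first gray image (current frame)
    ref_gray = ref[rows, cols, chan]         # second gray image (reference frame)
    return cur_gray.astype(np.uint8), ref_gray.astype(np.uint8)
```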
Optionally, determining the noise region of the first gray scale image of the current frame image according to the euclidean distance includes: under the condition that the Euclidean distance is smaller than or equal to a first preset distance, determining a pixel corresponding to the Euclidean distance as a noise region; determining a pixel corresponding to the Euclidean distance as a suspected noise area under the condition that the Euclidean distance is greater than a first preset distance and less than or equal to a second preset distance; and under the condition that the Euclidean distance is greater than a second preset distance, determining the pixel corresponding to the Euclidean distance as an effective region.
The Euclidean distance dist(x,y) between corresponding pixels of the first gray scale image and the second gray scale image is:
dist(x,y) = sqrt((curGrayImage(x,y) - refGrayImage(x,y))^2)
Here, curGrayImage(x,y) represents the gray scale value of the pixel point (x,y) in the first gray scale image of the current frame image, and refGrayImage(x,y) represents the gray scale value of the pixel point (x,y) in the second gray scale image of the reference frame image.
Optionally, when the euclidean distance is greater than the first preset distance and less than or equal to the second preset distance, after determining that the pixel corresponding to the euclidean distance is the suspected noise area, the method further includes: according to a reference area of a reference frame image corresponding to the suspected noise area; determining a first gray level change mean value curve of a suspected noise area and a second gray level change mean value curve of a reference area; determining the change similarity of the first gray level change mean value curve and the second gray level change mean value curve; according to the change similarity, under the condition that the first gray scale change mean value curve is not similar to the second gray scale change mean value curve, determining the pixels of the suspected noise area as the noise area; and under the condition that the first gray level change mean curve is similar to the second gray level change mean curve, determining the pixels of the suspected noise area as an effective area.
A further decision needs to be made as to whether the suspected noise area belongs to an effective area or a noise area. Through the above steps, it can be accurately determined whether the suspected noise area is an effective area or a noise area, which improves the accuracy of determining the noise area and therefore the accuracy of denoising.
Optionally, determining a first gray-scale variation mean curve of the suspected noise area and a second gray-scale variation mean curve of the reference area includes: determining a first row direction gray level mean value curve and a first column direction gray level mean value curve of a suspected noise area, and a second row direction gray level mean value curve and a second column direction gray level mean value curve of a reference area, wherein the first gray level change mean value curve comprises the first row direction gray level mean value curve and the first column direction gray level mean value curve, and the second gray level change mean value curve comprises the second row direction gray level mean value curve and the second column direction gray level mean value curve; determining the change similarity of the first gray scale change mean curve and the second gray scale change mean curve comprises: and determining a first change similarity according to the first row direction gray scale mean curve and the second row direction gray scale mean curve, and determining a second change similarity according to the first column direction gray scale mean curve and the second column direction gray scale mean curve, wherein the change similarities comprise the first change similarity and the second change similarity.
The first row direction gray level mean curve may be:
Cur_change_curve_row(row) = (1/N) * Σ_{col=1..N} curGrayImage(row, col), for row = 1, ..., M
In the formula, Cur_change_curve_row is the first row direction gray scale mean curve, M is the number of rows of pixel points in the suspected noise region, N is the number of columns of pixel points in the suspected noise region, curGrayImage(row, col) is the gray value of the pixel point of the suspected noise area, (row, col) represents the row-column coordinates of the pixel points in the suspected noise area, row represents the row, and col represents the column.
The second row direction gray level mean curve may be:
Ref_change_curve_row(row) = (1/N) * Σ_{col=1..N} refGrayImage(row, col), for row = 1, ..., M
In the formula, Ref_change_curve_row is the second row direction gray scale mean curve, M is the number of rows of pixel points in the reference area, N is the number of columns of pixel points in the reference area, refGrayImage(row, col) is the gray value of the pixel point of the reference area corresponding to the suspected noise area, (row, col) represents the row-column coordinates of the pixel points in the reference area, row represents the row, and col represents the column.
The first column direction gray scale mean curve may be:
Cur_change_curve_col(col) = (1/M) * Σ_{row=1..M} curGrayImage(row, col), for col = 1, ..., N
In the formula, Cur_change_curve_col is the first column direction gray scale mean curve, M is the number of rows of pixel points in the suspected noise region, N is the number of columns of pixel points in the suspected noise region, curGrayImage(row, col) is the gray value of the pixel point of the suspected noise area, (row, col) represents the row-column coordinates of the pixel points in the suspected noise area, row represents the row, and col represents the column.
The second column direction gray scale mean curve may be:
Ref_change_curve_col(col) = (1/M) * Σ_{row=1..M} refGrayImage(row, col), for col = 1, ..., N
In the formula, Ref_change_curve_col is the second column direction gray scale mean curve, M is the number of rows of pixel points in the reference area, N is the number of columns of pixel points in the reference area, refGrayImage(row, col) is the gray value of the pixel point of the reference area corresponding to the suspected noise area, (row, col) represents the row-column coordinates of the pixel points in the reference area, row represents the row, and col represents the column.
The reference area and the suspected noise area are the same in size and are both M x N.
And determining a first change similarity according to the first row direction gray scale mean curve and the second row direction gray scale mean curve, and determining a second change similarity according to the first column direction gray scale mean curve and the second column direction gray scale mean curve, wherein the change similarities comprise the first change similarity and the second change similarity. Thereby determining the change similarity of the first gray scale change mean curve and the second gray scale change mean curve.
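As a sketch of the curve computation just described (assuming, as the wording "mean curve" suggests, that each row sum is divided by the number of columns and each column sum by the number of rows), the following NumPy helper returns both curves for one gray block; the function name is illustrative.

```python
import numpy as np

def gray_mean_curves(block: np.ndarray):
    """Row direction and column direction gray scale mean curves of an M x N
    gray block (a suspected noise area or its reference counterpart)."""
    block = block.astype(np.float64)
    row_curve = block.mean(axis=1)   # M values: each row averaged over its N columns
    col_curve = block.mean(axis=0)   # N values: each column averaged over its M rows
    return row_curve, col_curve
```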
Optionally, determining that the pixel of the suspected noise area is a noise area when the first gray scale change mean curve is not similar to the second gray scale change mean curve according to the change similarity; determining the pixels of the suspected noise area as the effective area under the condition that the first gray scale variation mean curve is similar to the second gray scale variation mean curve comprises: under the condition that the first change similarity does not reach a first similarity threshold value or the second change similarity does not reach a second similarity threshold value, determining that the first gray scale change mean curve is not similar to the second gray scale change mean curve, and setting the pixels of the suspected noise area as the noise area; and under the condition that the first change similarity reaches a first similarity threshold value and the second change similarity reaches a second similarity threshold value, determining that the first gray scale change mean value curve is similar to the second gray scale change mean value curve, and taking the pixels of the suspected noise area as the effective area.
After the first and second row direction gray scale mean curves and the first and second column direction gray scale mean curves have been determined, whether the two image blocks are similar blocks is judged by comparing the mean square error (MSE) between the gray scale change curves of the corresponding image blocks. Specifically, the mean square error of the first row direction gray scale mean curve and the second row direction gray scale mean curve is taken as the first change similarity, and the mean square error of the first column direction gray scale mean curve and the second column direction gray scale mean curve is taken as the second change similarity. In this way, whether the suspected noise area is an effective area or a noise area is accurately determined, which improves the accuracy of determining the noise area and therefore the accuracy of denoising.
Optionally, determining the change similarity between the first gray scale change mean curve and the second gray scale change mean curve includes: performing exponential transformation on the first gray scale change mean value curve and the second gray scale change mean value curve; and calculating the mean square error of the first gray scale change mean curve and the second gray scale change mean curve after the exponential transformation to obtain the change similarity.
Through exponential transformation, the change conditions of the first gray scale change mean value curve and the second gray scale change mean value curve are amplified numerically and are convenient to compare, so that the comparison accuracy of the first gray scale change mean value curve and the second gray scale change mean value curve is improved, and the judgment accuracy of the suspected noise area is improved.
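A minimal sketch of this similarity test is given below. The exponential transform follows the description above; how the mean square error is taken on the transformed curve is not spelled out in the text, so measuring the deviation from 1 (the value of exp(0) for identical curves) and the numeric thresholds are assumptions of this sketch.

```python
import numpy as np

def curve_mse(ref_curve: np.ndarray, cur_curve: np.ndarray) -> float:
    """MSE of the exponentially transformed difference between a reference-frame
    curve and the corresponding current-frame curve (0 when they coincide)."""
    transformed = np.exp(ref_curve - cur_curve)      # amplifies genuine gray level changes
    return float(np.mean((transformed - 1.0) ** 2))  # assumed form of the mean square error

def is_similar_block(ref_row, cur_row, ref_col, cur_col,
                     row_th: float = 0.5, col_th: float = 0.5) -> bool:
    """A suspected block counts as similar to its reference block only when both
    the row direction and the column direction MSE stay below their thresholds."""
    return (curve_mse(ref_row, cur_row) < row_th and
            curve_mse(ref_col, cur_col) < col_th)
```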
It should be noted that this embodiment also provides an alternative implementation, which is described in detail below.
By analyzing the characteristics of noise doping and the influence of noise on the coding mode, this embodiment provides a method that screens out noise regions and change regions by comparing the similarity of the maximum inter-class difference image regions of adjacent frames, and reduces the coding code stream by replacing the noise regions of the current frame with the values of the corresponding regions of the reference frame.
In order to solve the problem of an increased coding code stream caused by noise doping in image coding, the present embodiment proposes a method that compares the similarity of the maximum inter-class difference image regions of adjacent frames, where the maximum inter-class difference images are the first gray level image of the current frame image and the second gray level image of the reference frame image; each region is judged to be a noise area or an effective change area, and the image of a noise area of the current frame is replaced with the corresponding reference frame image, thereby reducing the number of changed macro blocks and increasing the number of unchanged macro blocks in inter-frame coding, and achieving the effect of reducing the code stream for coding one frame.
The denoising algorithm described in this embodiment may be performed in four steps:
First, calculating the maximum inter-class difference gray images of the current frame and the reference frame, namely determining the first gray image of the current frame image and the second gray image of the reference frame image.
since the input image is a YUV image, in order to highlight the change area between the current frame and the reference frame, the embodiment calculates the difference between the Y/U/V three channels of the current frame and the reference frame, and sets the gray value of the corresponding coordinate as the corresponding channel value of the maximum inter-class difference, as shown in the following formula: cur denotes a current frame image-related parameter, ref denotes a reference frame image-related parameter, and the same applies below.
diff_y(x,y) = |cur_y(x,y) - ref_y(x,y)|
diff_u(x,y) = |cur_u(x,y) - ref_u(x,y)|
diff_v(x,y) = |cur_v(x,y) - ref_v(x,y)|
In the formulas, diff_y(x,y), diff_u(x,y) and diff_v(x,y) are the differences of the Y, U and V channels respectively; cur_y(x,y), cur_u(x,y) and cur_v(x,y) are the gray values in the Y, U and V channels of the pixel point with coordinate (x,y) in the current frame image; and ref_y(x,y), ref_u(x,y) and ref_v(x,y) are the gray values in the Y, U and V channels of the pixel point with coordinate (x,y) in the reference frame image.
maxDiff(x,y) = max{diff_y(x,y), diff_u(x,y), diff_v(x,y)}
minDiff(x,y) = min{diff_y(x,y), diff_u(x,y), diff_v(x,y)}
maxDiff(x,y) is the maximum of the differences of the Y/U/V three channels, and minDiff(x,y) is the minimum of the differences of the Y/U/V three channels.
Then:
curGrayImage(x,y) = cur_c(x,y), where c is the channel giving maxDiff(x,y) if maxDiff(x,y) - minDiff(x,y) > 2, and the channel giving minDiff(x,y) otherwise;
refGrayImage(x,y) = ref_c(x,y), with the channel c chosen in the same way.
curGrayImage(x,y) is the first gray image of the current frame image, i.e. its maximum inter-class difference gray image, and refGrayImage(x,y) is the second gray image of the reference frame image, i.e. its maximum inter-class difference gray image.
Secondly, preliminarily screening a noise area and a change area of the current frame image according to the Euclidean distance
According to the gray level images of the current frame and the reference frame obtained in the previous step, in order to preliminarily screen out a noise area and a change area in the image, the Euclidean distance dist (x, y) of a corresponding pixel between two frames of gray level images is calculated, and the following formula is shown:
dist(x,y) = sqrt((curGrayImage(x,y) - refGrayImage(x,y))^2)
setting the threshold th1 eliminates the interference terms, i.e. noise interference areas, and setting the threshold th2 to extract the effective change area between the frame images, i.e. the area where the current frame changes from the reference frame, the threshold th1 set in this embodiment is 1.5, and the threshold th2 is 12, as shown in equation two:
ImgRecon_Map(x,y) = 0, if dist(x,y) <= th1
ImgRecon_Map(x,y) = 2, if th1 < dist(x,y) <= th2          (formula two)
ImgRecon_Map(x,y) = 1, if dist(x,y) > th2
In the formula, ImgRecon_Map is the reconstruction window function of the current frame image, and ImgRecon_Map(x,y) is its value at the pixel point (x,y). A region whose value equals 0 is a noise region and is filled with the corresponding reference frame image, a region whose value equals 1 is an effective change region and keeps the current frame image data, and a region whose value equals 2 is a suspected noise region that needs to be further determined.
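The distance calculation and the three-way split of formula two can be sketched as follows; th1 = 1.5 and th2 = 12 are the values given in this embodiment, while the function name and the use of NumPy are illustrative.

```python
import numpy as np

def build_recon_map(cur_gray: np.ndarray, ref_gray: np.ndarray,
                    th1: float = 1.5, th2: float = 12.0) -> np.ndarray:
    """Per-pixel Euclidean distance between the two gray images, then the split
    into noise (0), effective change (1) and suspected noise (2) regions."""
    dist = np.sqrt((cur_gray.astype(np.float64) - ref_gray.astype(np.float64)) ** 2)
    recon_map = np.ones(dist.shape, dtype=np.uint8)   # default 1: effective change area
    recon_map[dist <= th1] = 0                        # noise area, filled from the reference frame
    recon_map[(dist > th1) & (dist <= th2)] = 2       # suspected noise area, decided later
    return recon_map
```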
Thirdly, calculating the similarity of the suspected noise area
1) Suspected noise region block extraction
Fig. 2-1 is a schematic diagram of a noise point position according to an embodiment of the present invention, Fig. 2-2 is a schematic diagram of the noise area where a noise point is located in the current frame image according to an embodiment of the present invention, and Fig. 2-3 is a schematic diagram of the replacement area of the reference frame image corresponding to a noise area according to an embodiment of the present invention. As shown in Figs. 2-1, 2-2 and 2-3, the coordinate positions of the pixel points for which ImgRecon_Map equals 2 are located in the first gray scale image of the current frame and the second gray scale image of the reference frame, and 50 x 50 image block areas Cur_IMG_Block and Ref_IMG_Block centered on these coordinate points are extracted.
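A sketch of this block extraction step follows; clamping the 50 x 50 window at the image borders is an assumption of this sketch, since border handling is not described in the text.

```python
import numpy as np

def extract_blocks(cur_gray: np.ndarray, ref_gray: np.ndarray,
                   recon_map: np.ndarray, size: int = 50):
    """Yield ((row, col), Cur_IMG_Block, Ref_IMG_Block) for every coordinate
    marked 2 (suspected noise) in the reconstruction window function."""
    half = size // 2
    h, w = recon_map.shape
    for r, c in zip(*np.where(recon_map == 2)):
        r0, c0 = max(0, r - half), max(0, c - half)   # clamp the window at image borders
        r1, c1 = min(h, r0 + size), min(w, c0 + size)
        yield (r, c), cur_gray[r0:r1, c0:c1], ref_gray[r0:r1, c0:c1]
```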
2) Region block similarity calculation
Respectively calculate the gray level change mean curves in the row and column directions of the extracted current frame image block Cur_IMG_Block and the extracted reference frame image block Ref_IMG_Block:
for row = 1:50
Cur_change_curve_row(row) = (1/50) * Σ_{col=1..50} Cur_IMG_Block(row, col)
Ref_change_curve_row(row) = (1/50) * Σ_{col=1..50} Ref_IMG_Block(row, col)
for col = 1:50
Cur_change_curve_col(col) = (1/50) * Σ_{row=1..50} Cur_IMG_Block(row, col)
Ref_change_curve_col(col) = (1/50) * Σ_{row=1..50} Ref_IMG_Block(row, col)
Ref_change_curve_row is the row direction gray scale change curve of the reference frame block, Cur_change_curve_row is the row direction gray scale change curve of the current frame block, Ref_change_curve_col is the column direction gray scale change curve of the reference frame block, and Cur_change_curve_col is the column direction gray scale change curve of the current frame block. The sums of the gray values of each row and each column of the extracted blocks are calculated and divided by the number of pixels, giving the gray change mean curves of the current frame and reference frame image blocks. Whether the two area blocks are similar blocks is determined by comparing the change similarity of the mean value curves in the row and column directions of the reference frame and the current frame. If they are similar, the values in the area block corresponding to the reconstruction window function ImgRecon_Map are set to 0, i.e. the area is filled with the reference frame image; if they are not similar, the center coordinate in ImgRecon_Map is set to 1, indicating that the pixel value at that point is taken from the current frame image. In this embodiment, whether the two blocks are similar is judged by comparing the mean square error (MSE) between the gray scale change curves of the corresponding image blocks.
Through experimental observation it was found that, when comparing the block similarity between the current frame image and the reference frame image, an image area with an obvious change produces a clear difference between the two gray change curves, which is reflected in a large mean square error (MSE) value. For an image area whose change is less obvious, however, the change is also less obvious on the gray change curves calculated in the previous step, so the resulting MSE value does not reach the set threshold, the effective change area is identified as a noise area, and a misjudgment is caused.
Therefore, the present embodiment amplifies the gray scale change characteristic of the effective region of the image by an exponential transformation without affecting the change characteristic of the noise region, as shown in the following equations:
new_Change_curve_row = exp(Ref_change_curve_row - Cur_change_curve_row)
new_Change_curve_col = exp(Ref_change_curve_col - Cur_change_curve_col)
The mean square errors of the transformed curves are then calculated, as follows:
MSE_row is the mean square error obtained from the transformed row direction curve new_Change_curve_row, and MSE_col is the mean square error obtained from the transformed column direction curve new_Change_curve_col.
setting the value of a reconstruction window function imgRecon _ Map according to the calculated inter-frame block similarity:
The values of ImgRecon_Map in the block area corresponding to the coordinate Add(i) are set to 0 when MSE_row and MSE_col indicate that the two blocks are similar, and the center coordinate is set to 1 otherwise.
fourth, current frame reconstruction
Reconstructing a current frame image according to a reconstruction window function ImgRecon _ Map:
Cur_frame(x,y) = Ref_frame(x,y), if ImgRecon_Map(x,y) = 0; Cur_frame(x,y) is kept unchanged, if ImgRecon_Map(x,y) = 1.
the number of the unchanged macro blocks between the current frame and the reference frame is increased in the interframe coding of the current frame after the denoising processing and reconstruction, thereby avoiding the repeated coding of the macro blocks caused by the influence of quantization noise and greatly reducing the coding transmission code rate between frame images.
Fig. 3 is a flowchart of image processing according to an embodiment of the invention. As shown in Fig. 3, the process starts by reading the current image Cur_frame of the nth frame, where n counts the frame images of the video stream to which the current frame belongs, starting from n = 1, n being a positive integer. It is determined whether the extracted nth frame current image Cur_frame is the first frame. If so, the reference frame Ref_frame is updated by taking the first frame current image Cur_frame as the reference frame. If not, the differences of the three image channels between the current frame and the reference frame image are calculated, and the first gray scale image cur_gray and the second gray scale image ref_gray are determined; then the maximum inter-class difference gray images of the current frame image and the reference frame image are compared, namely the Euclidean distance DIST between the first gray image and the second gray image is calculated; according to the Euclidean distance, the image is divided into noise areas, suspected noise areas and effective change areas in ImgRecon_Map; the coordinates Add(i) of the suspected noise areas in ImgRecon_Map are extracted, where i denotes the ith suspected noise area in ImgRecon_Map; 50 x 50 area blocks around the position of the corresponding coordinate Add(i) are extracted from cur_gray and ref_gray; the gray scale change curves of the two area blocks in the horizontal axis and vertical axis directions, namely the first gray scale change curve and the second gray scale change curve, are calculated; an exponential operation is performed on the difference of the gray scale change curves of the current frame image and the reference frame image in the same direction, amplifying the values of gently changing areas; the mean square error (MSE) between the two area blocks is calculated, the position of the coordinate Add(i) in ImgRecon_Map is reassigned according to the MSE value, and the area at the coordinate Add(i) is re-determined as a noise area or an effective area. It is then determined whether all suspected noise areas in ImgRecon_Map have been processed; if not, i is incremented, i.e. i = i + 1, and the above steps are repeated to re-determine whether the area at coordinate Add(i) is a noise area or an effective area; if so, the current frame image Cur_frame is reconstructed according to ImgRecon_Map, the reference frame is updated so that Ref_frame = Cur_frame, i.e. the reconstructed current frame image is used as the new reference frame for the next iteration, and the reconstructed current frame image is encoded. Finally it is determined whether the frame sequence of the current frame image has been fully processed; if so, the process ends; otherwise n is incremented, i.e. n = n + 1, and the subsequent frame images continue to be encoded until all frame images of the frame sequence have been encoded.
Fig. 4 is a schematic diagram of an image processing apparatus according to an embodiment of the present invention, and as shown in fig. 4, according to another aspect of the embodiment of the present invention, there is also provided an image processing apparatus including: a first determination module 42, a calculation module 44, a second determination module 46, and a replacement module 48, which are described in detail below.
A first determining module 42, configured to determine a first grayscale image of a current frame image to be encoded and a second grayscale image of a reference frame image; a calculating module 44, connected to the first determining module 42, for calculating an euclidean distance between pixels of the first gray scale image and the second gray scale image; a second determining module 46, connected to the calculating module 44, for determining a noise region of the first gray scale image of the current frame image according to the euclidean distance; and a replacing module 48, connected to the second determining module 46, configured to replace the noise region of the current frame image with a replacing region of the reference frame image, so as to obtain an updated current frame image to be encoded, where the replacing region is a region corresponding to the noise region in the reference frame image.
In this device, the first determining module 42 determines a first gray level image of the current frame image to be coded and a second gray level image of the reference frame image; the calculation module 44 calculates the Euclidean distance between corresponding pixels of the first gray scale image and the second gray scale image; the second determining module 46 determines the noise area of the first gray image of the current frame image according to the Euclidean distance; and the replacing module 48 replaces the noise area of the current frame image with the replacement area of the reference frame image to obtain the updated current frame image to be encoded, the replacement area being the area of the reference frame image corresponding to the noise area. By determining the noise area of the current frame image from the Euclidean distance between pixels of the first gray scale image of the current frame image and the second gray scale image of the reference frame image, and replacing that noise area with the corresponding replacement area of the reference frame image, the current frame image is denoised, the coding redundancy caused by the noise area is avoided, and the coding of the current frame is effectively simplified, which realizes the technical effect of reducing the coding code stream of the current frame and solves the technical problem in the related art that the coding code stream is increased because noise doped into the image coding produces redundant coding.
According to another aspect of the embodiments of the present invention, there is also provided a processor, configured to execute a program, where the program executes to perform the image processing method of any one of the above.
According to another aspect of the embodiments of the present invention, there is also provided a computer storage medium including a stored program, wherein when the program runs, an apparatus in which the computer storage medium is located is controlled to execute the image processing method of any one of the above.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
In the above embodiments of the present invention, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed technology can be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units may be a logical division, and in actual implementation, there may be another division, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable computer storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a computer storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to perform all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned computer storage media include: a U-disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic or optical disk, and other various media capable of storing program codes.
The foregoing is only a preferred embodiment of the present invention, and it should be noted that, for those skilled in the art, various modifications and improvements can be made without departing from the principle of the present invention, and these modifications and improvements should also be regarded as falling within the protection scope of the present invention.

Claims (10)

1. An image processing method, comprising:
determining a first gray level image of a current frame image to be coded and a second gray level image of a reference frame image;
calculating to obtain Euclidean distance between pixels of the first gray level image and the second gray level image;
determining a noise area of a first gray level image of the current frame image according to the Euclidean distance;
and replacing the noise area of the current frame image with the replacement area of the reference frame image to obtain an updated current frame image to be coded, wherein the replacement area is an area corresponding to the noise area in the reference frame image.
2. The method of claim 1, wherein determining the first gray scale image of the current frame image to be encoded and the second gray scale image of the reference frame image comprises:
determining, for each of the Y, U and V channels, a difference between gray values of corresponding pixels of the current frame image and the reference frame image, wherein the current frame image and the reference frame image are YUV images;
determining a maximum difference and a minimum difference among the differences of the Y, U and V channels;
taking, when the difference between the maximum difference and the minimum difference satisfies a preset condition, the gray value of the channel corresponding to the maximum difference as the gray value at the pixel coordinates of the current frame image and the reference frame image; and taking, when the difference between the maximum difference and the minimum difference does not satisfy the preset condition, the gray value of the channel corresponding to the minimum difference as the gray value at the pixel coordinates of the current frame image and the reference frame image, so as to obtain the first gray scale image and the second gray scale image.
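A hedged sketch of the per-pixel channel selection recited in claim 2. The claim leaves the "preset condition" open, so the gap threshold condition_gap below is purely an assumed placeholder.

```python
import numpy as np

def select_gray_images(current_yuv, reference_yuv, condition_gap=10.0):
    """Per pixel, pick the Y/U/V channel whose inter-frame difference is
    largest (or smallest) and use its value as the gray value."""
    cur = current_yuv.astype(np.float64)
    ref = reference_yuv.astype(np.float64)

    # Per-channel absolute differences, shape (H, W, 3).
    diff = np.abs(cur - ref)
    max_idx = np.argmax(diff, axis=-1)
    min_idx = np.argmin(diff, axis=-1)
    gap = diff.max(axis=-1) - diff.min(axis=-1)

    # Assumed "preset condition": the spread between the largest and the
    # smallest channel difference must exceed condition_gap.
    chosen = np.where(gap > condition_gap, max_idx, min_idx)

    # Read the gray value of the chosen channel for both frames.
    rows, cols = np.indices(chosen.shape)
    first_gray = cur[rows, cols, chosen]
    second_gray = ref[rows, cols, chosen]
    return first_gray, second_gray
```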
3. The method of claim 1, wherein determining the noise region of the first gray scale image of the current frame image according to the Euclidean distance comprises:
determining a pixel corresponding to the Euclidean distance as belonging to the noise region when the Euclidean distance is less than or equal to a first preset distance;
determining the pixel corresponding to the Euclidean distance as belonging to a suspected noise region when the Euclidean distance is greater than the first preset distance and less than or equal to a second preset distance;
and determining the pixel corresponding to the Euclidean distance as belonging to an effective region when the Euclidean distance is greater than the second preset distance.
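The three-way split of claim 3 maps directly onto two thresholds. In this sketch the label values and the two preset distances are illustrative choices, not values taken from the patent.

```python
import numpy as np

NOISE, SUSPECTED, EFFECTIVE = 0, 1, 2

def classify_pixels(distance, first_preset=2.0, second_preset=8.0):
    """Label every pixel of a per-pixel distance map as noise, suspected
    noise, or effective, following the three cases of claim 3."""
    labels = np.full(distance.shape, EFFECTIVE, dtype=np.uint8)
    labels[distance <= second_preset] = SUSPECTED  # first < d <= second
    labels[distance <= first_preset] = NOISE       # d <= first overrides
    return labels
```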
4. The method according to claim 3, wherein, after determining that the pixel corresponding to the Euclidean distance belongs to the suspected noise region when the Euclidean distance is greater than the first preset distance and less than or equal to the second preset distance, the method further comprises:
determining a reference region of the reference frame image corresponding to the suspected noise region;
determining a first gray scale variation mean curve of the suspected noise region and a second gray scale variation mean curve of the reference region;
determining a variation similarity between the first gray scale variation mean curve and the second gray scale variation mean curve;
and determining, according to the variation similarity, that pixels of the suspected noise region belong to the noise region when the first gray scale variation mean curve is not similar to the second gray scale variation mean curve, and that the pixels of the suspected noise region belong to the effective region when the first gray scale variation mean curve is similar to the second gray scale variation mean curve.
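A sketch of the verification step of claim 4. The similarity test used here is a simple normalized-correlation placeholder; claims 5 to 7 recite the row/column mean-curve and mean-square-error variant actually claimed.

```python
import numpy as np

def verify_suspected_region(cur_region, ref_region, similarity_threshold=0.9):
    """Decide whether a suspected noise region is real noise or an
    effective (genuinely changed) region, in the spirit of claim 4."""
    # Gray scale variation mean curves: mean gray value per row.
    first_curve = cur_region.mean(axis=1)
    second_curve = ref_region.mean(axis=1)

    # Placeholder similarity: normalized correlation of the two curves.
    a = first_curve - first_curve.mean()
    b = second_curve - second_curve.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    similarity = float(a @ b / denom) if denom > 0 else 1.0

    # Similar variation -> the region really changed (effective);
    # dissimilar variation -> keep treating the suspected region as noise.
    return "effective" if similarity >= similarity_threshold else "noise"
```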
5. The method of claim 4, wherein determining the first gray scale variation mean curve of the suspected noise region and the second gray scale variation mean curve of the reference region comprises:
determining a first row-direction gray scale mean curve and a first column-direction gray scale mean curve of the suspected noise region, and a second row-direction gray scale mean curve and a second column-direction gray scale mean curve of the reference region, wherein the first gray scale variation mean curve comprises the first row-direction gray scale mean curve and the first column-direction gray scale mean curve, and the second gray scale variation mean curve comprises the second row-direction gray scale mean curve and the second column-direction gray scale mean curve;
and wherein determining the variation similarity between the first gray scale variation mean curve and the second gray scale variation mean curve comprises:
determining a first variation similarity according to the first row-direction gray scale mean curve and the second row-direction gray scale mean curve, and determining a second variation similarity according to the first column-direction gray scale mean curve and the second column-direction gray scale mean curve, wherein the variation similarity comprises the first variation similarity and the second variation similarity.
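A sketch of the row-direction and column-direction gray scale mean curves of claim 5, with the concrete similarity measure left as a caller-supplied function; both function names are assumptions for the example.

```python
import numpy as np

def gray_mean_curves(region):
    """Row-direction and column-direction gray scale mean curves of a
    rectangular gray scale region (claim 5)."""
    row_curve = region.mean(axis=1)   # one mean value per row
    col_curve = region.mean(axis=0)   # one mean value per column
    return row_curve, col_curve

def variation_similarities(cur_region, ref_region, similarity_fn):
    """First similarity from the row curves, second from the column curves."""
    cur_row, cur_col = gray_mean_curves(np.asarray(cur_region, dtype=np.float64))
    ref_row, ref_col = gray_mean_curves(np.asarray(ref_region, dtype=np.float64))
    first_similarity = similarity_fn(cur_row, ref_row)
    second_similarity = similarity_fn(cur_col, ref_col)
    return first_similarity, second_similarity
```

Claim 6 then requires both similarities to reach their respective thresholds before the two curves are deemed similar.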
6. The method according to claim 5, wherein determining, according to the variation similarity, that the pixels of the suspected noise region belong to the noise region when the first gray scale variation mean curve is not similar to the second gray scale variation mean curve, and that the pixels of the suspected noise region belong to the effective region when the first gray scale variation mean curve is similar to the second gray scale variation mean curve comprises:
determining that the first gray scale variation mean curve is not similar to the second gray scale variation mean curve when the first variation similarity does not reach a first similarity threshold or the second variation similarity does not reach a second similarity threshold, the pixels of the suspected noise region then belonging to the noise region;
and determining that the first gray scale variation mean curve is similar to the second gray scale variation mean curve when the first variation similarity reaches the first similarity threshold and the second variation similarity reaches the second similarity threshold, the pixels of the suspected noise region then belonging to the effective region.
7. The method of claim 5, wherein determining the variation similarity between the first gray scale variation mean curve and the second gray scale variation mean curve comprises:
performing an exponential transformation on the first gray scale variation mean curve and the second gray scale variation mean curve;
and calculating a mean square error between the exponentially transformed first gray scale variation mean curve and the exponentially transformed second gray scale variation mean curve to obtain the variation similarity.
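A sketch of the similarity measure of claim 7. The claim fixes neither the base of the exponential transformation nor any normalization, so the scaling to roughly [0, 1] before np.exp is an assumption made only to keep the example numerically tame.

```python
import numpy as np

def curve_similarity(first_curve, second_curve, scale=255.0):
    """Exponentially transform both gray scale variation mean curves and
    return the mean square error of the transformed curves; a smaller
    value means the two curves vary more similarly (claim 7, sketched)."""
    # Assumed normalization for 8-bit gray values before the exponential.
    a = np.exp(np.asarray(first_curve, dtype=np.float64) / scale)
    b = np.exp(np.asarray(second_curve, dtype=np.float64) / scale)

    # Mean square error of the exponentially transformed curves.
    return float(np.mean((a - b) ** 2))
```

Used with the sketch after claim 5, this function can be passed in as similarity_fn, with the caveat that here a lower score, not a higher one, indicates greater similarity.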
8. An image processing apparatus characterized by comprising:
a first determining module, configured to determine a first gray scale image of a current frame image to be encoded and a second gray scale image of a reference frame image;
a calculation module, configured to calculate a Euclidean distance between corresponding pixels of the first gray scale image and the second gray scale image;
a second determining module, configured to determine a noise region of the first gray scale image of the current frame image according to the Euclidean distance;
and a replacing module, configured to replace the noise region of the current frame image with a replacement region of the reference frame image to obtain an updated current frame image to be encoded, wherein the replacement region is a region of the reference frame image corresponding to the noise region.
9. A computer storage medium, comprising a stored program, wherein the program, when executed, controls an apparatus in which the computer storage medium is located to perform the image processing method according to any one of claims 1 to 7.
10. A processor, characterized in that the processor is configured to run a program, wherein the program, when running, executes the image processing method according to any one of claims 1 to 7.
CN202011182890.4A 2020-10-29 2020-10-29 Image processing method and device and processor Pending CN112308796A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011182890.4A CN112308796A (en) 2020-10-29 2020-10-29 Image processing method and device and processor

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011182890.4A CN112308796A (en) 2020-10-29 2020-10-29 Image processing method and device and processor

Publications (1)

Publication Number Publication Date
CN112308796A true CN112308796A (en) 2021-02-02

Family

ID=74331760

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011182890.4A Pending CN112308796A (en) 2020-10-29 2020-10-29 Image processing method and device and processor

Country Status (1)

Country Link
CN (1) CN112308796A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2025124198A1 (en) * 2023-12-13 2025-06-19 Tianyi Cloud Technology Co., Ltd. Grayscale image compression method in weak network environment

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20020017526A (en) * 2000-08-30 2002-03-07 신익수 A Principal Distance Based Error Diffusion Technique
WO2006006373A1 (en) * 2004-07-07 2006-01-19 Nikon Corporation Image processor and computer program product
CN103679173A (en) * 2013-12-04 2014-03-26 清华大学深圳研究生院 Method for detecting image salient region
US20190340738A1 (en) * 2016-12-28 2019-11-07 Karl-Franzens-Universitat Graz Method and device for image processing
CN110782501A (en) * 2019-09-09 2020-02-11 西安万像电子科技有限公司 Image processing method and device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Li Dacheng; Yang Xiaodong: "An image stabilization algorithm based on gray projection difference", Ship Electronic Engineering, no. 01, 20 January 2017 (2017-01-20) *
Zhu Yangang; Zhang Guimei: "An improved non-local means image denoising algorithm", Computer Engineering and Applications, no. 18, 15 September 2017 (2017-09-15) *

Similar Documents

Publication Publication Date Title
CN106231214B (en) An Approximate Lossless Compression Method for High-speed CMOS Sensor Images Based on Adjustable Macroblocks
KR100422935B1 (en) Picture encoder, picture decoder, picture encoding method, picture decoding method, and medium
US20070280551A1 (en) Removing ringing and blocking artifacts from JPEG compressed document images
CN110324617B (en) Image processing method and device
CN113989168B (en) Self-adaptive non-local mean value filtering method for spiced salt noise
CN111372080B (en) Processing method and device of radar situation map, storage medium and processor
CN110162986B (en) Reversible information hiding method based on adjacent pixel prediction model
CN104992419A (en) Super pixel Gaussian filtering pre-processing method based on JND factor
CN116233479A (en) Live broadcast information content auditing system and method based on data processing
CN1224274C (en) Error concealment method and device
CN112837238A (en) Image processing method and device
CN110782501B (en) Image processing method and device
CN112308796A (en) Image processing method and device and processor
CN111770334A (en) Data encoding method and device, and data decoding method and device
CN112669328B (en) Medical image segmentation method
US20100239019A1 (en) Post processing of motion vectors using sad for low bit rate video compression
CN107241597A (en) A kind of reversible information hidden method of combination quaternary tree adaptive coding
US8811766B2 (en) Perceptual block masking estimation system
CN113744158B (en) Image generation method, device, electronic equipment and storage medium
CN113810692B (en) Method for framing changes and movements, image processing device and program product
CN111327909B (en) A fast depth coding method for 3D-HEVC
JPH08322041A (en) Block distortion eliminator
CN111080550B (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
Xiong et al. Image coding with parameter-assistant inpainting
WO2015128302A1 (en) Method and apparatus for filtering and analyzing a noise in an image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination