CN119182874A - Video deinterlacing processing method, device, electronic device and storage medium - Google Patents

Info

Publication number
CN119182874A
CN119182874A
Authority
CN
China
Prior art keywords
pixel
field image
interpolated
image
current
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202411038041.XA
Other languages
Chinese (zh)
Inventor
彭青
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qstech Co Ltd
Original Assignee
Qstech Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qstech Co Ltd filed Critical Qstech Co Ltd
Priority to CN202411038041.XA priority Critical patent/CN119182874A/en
Publication of CN119182874A publication Critical patent/CN119182874A/en
Pending legal-status Critical Current

Landscapes

  • Television Systems (AREA)

Abstract

The present application relates to a video deinterlacing processing method, device, electronic device and storage medium, the method comprising: obtaining a current field image, the previous field image of the current field image and the subsequent field image of the current field image; if a pixel point to be interpolated in the current field image is a static pixel point, interpolating it according to the field data of the previous field image and the field data of the subsequent field image; if the pixel point to be interpolated is a moving pixel point, interpolating it according to the field data of the previous field image, the field data of the subsequent field image and the field data of the current field image; and obtaining the interpolated current field image, thereby avoiding image content distortion, blurring and jitter in the interpolated current field image.

Description

Video de-interlacing processing method, device, electronic equipment and storage medium
Technical Field
The present invention relates to the field of video signal processing technologies, and in particular, to a video de-interlacing processing method, apparatus, electronic device, and storage medium.
Background
In order to reduce transmission bandwidth and compressed storage space, the conventional analog television industry commonly adopts an interlaced sampling technology, that is, a frame of complete video image is divided into two parts of an odd field and an even field to be transmitted respectively, so that interlaced video is generated. However, such interlaced video cannot be directly displayed on progressive display devices in the current digital television industry, and therefore, the interlaced video needs to be converted into progressive video, which generates a video de-interlacing technology, that is, interpolation processing is performed on data of a blank line in the interlaced video.
In the related art, the even lines of an odd field in interlaced video are interpolated by copying the data of the line above each even line into that line. Similarly, the odd lines of an even field are interpolated by copying the data of the line below each odd line into that line.
However, this simple line-duplication method causes distortion, blurring, and jitter in the image content of the interpolated interlaced video.
Disclosure of Invention
In view of the above, an object of the present invention is to provide a video de-interlacing processing method, apparatus, electronic device, and storage medium, which have the advantage of avoiding distortion, blurring, and jitter of image content in interlaced video after interpolation processing.
According to a first aspect of an embodiment of the present application, there is provided a video de-interlacing processing method, including the steps of:
Acquiring a current field image, a previous field image of the current field image and a next field image of the current field image, wherein the current field image is an image obtained by conducting interlacing sampling on a video image of a current frame, the previous field image of the current field image is an image obtained by conducting interlacing sampling on a video image of a previous frame of the current frame, and the next field image of the current field image is an image obtained by conducting interlacing sampling on a video image of a next frame of the current frame;
If the pixel point to be interpolated of the current field image is a static pixel point, interpolating the pixel point to be interpolated according to the field data of the previous field image and the field data of the next field image, wherein the pixel point to be interpolated is a pixel point of a pixel value to be filled;
if the pixel point to be interpolated of the current field image is a motion pixel point, interpolating the pixel point to be interpolated according to the field data of the previous field image, the field data of the next field image and the field data of the current field image;
and obtaining the interpolated current field image.
According to a second aspect of an embodiment of the present application, there is provided a video de-interlacing processing apparatus including:
The system comprises a field image acquisition module, a display module and a display module, wherein the field image acquisition module is used for acquiring a current field image, a previous field image of the current field image and a next field image of the current field image, wherein the current field image is an image obtained by conducting interlacing sampling on a video image of a current frame, the previous field image of the current field image is an image obtained by conducting interlacing sampling on a video image of a previous frame of the current frame, and the next field image of the current field image is an image obtained by conducting interlacing sampling on a video image of a next frame of the current frame;
The static pixel point interpolation module is used for interpolating the pixel points to be interpolated according to the field data of the previous field image and the field data of the next field image if the pixel points to be interpolated of the current field image are static pixel points;
The motion pixel interpolation module is used for interpolating the pixel to be interpolated according to the field data of the previous field image, the field data of the next field image and the field data of the current field image if the pixel to be interpolated of the current field image is the motion pixel;
and the current field image obtaining module after interpolation is used for obtaining the current field image after interpolation.
According to a third aspect of an embodiment of the present application, there is provided an electronic device including a parity field arbiter, a memory, and a data processor;
the parity field discriminator is connected with the memory, and the memory is connected with the data processor;
The parity field discriminator is used for receiving the current field image, the previous field image of the current field image and the next field image of the current field image, and transmitting the current field image, the previous field image of the current field image and the next field image of the current field image to the memory, wherein the current field image is an image obtained by conducting interlacing sampling on a video image of a current frame, the previous field image of the current field image is an image obtained by conducting interlacing sampling on a video image of a previous frame of the current frame, and the next field image of the current field image is an image obtained by conducting interlacing sampling on a video image of a next frame of the current frame;
The memory is used for transmitting the current field image, the previous field image of the current field image and the next field image of the current field image to the data processor;
the data processor is used for interpolating pixel points to be interpolated in the current field image according to the video de-interlacing processing method, and obtaining the current field image after interpolation.
According to a fourth aspect of embodiments of the present application, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements a video de-interlacing processing method as in any of the above.
The embodiment of the application obtains a current field image, the previous field image of the current field image and the subsequent field image of the current field image, wherein the current field image is obtained by interlaced sampling of the video image of the current frame, the previous field image is obtained by interlaced sampling of the video image of the frame before the current frame, and the subsequent field image is obtained by interlaced sampling of the video image of the frame after the current frame. If a pixel point to be interpolated in the current field image is a static pixel point, it is interpolated according to the field data of the previous field image and the field data of the subsequent field image, wherein a pixel point to be interpolated is a pixel point whose pixel value is to be filled in; if the pixel point to be interpolated is a moving pixel point, it is interpolated according to the field data of the previous field image, the field data of the subsequent field image and the field data of the current field image, and the interpolated current field image is then obtained. In other words, motion detection is performed on each pixel point to be interpolated in the current field image to determine whether it is a static pixel point or a moving pixel point: a static pixel point is interpolated from the data of the fields adjacent to the current field image, while a moving pixel point is interpolated from both the adjacent field data and the field data of the current field image itself.
By adopting different interpolation processing modes for the static pixel points and the moving pixel points, the interpolation precision of the pixel points to be interpolated in the current field image is improved, and distortion, blurring and jitter of the image content of the interpolated current field image are avoided.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application as claimed.
For a better understanding and implementation, the present invention is described in detail below with reference to the drawings.
Drawings
Fig. 1 is a flow chart of a video de-interlacing processing method according to an embodiment of the present application;
fig. 2 is a block diagram of a video de-interlacing processing device according to an embodiment of the present application;
Fig. 3 is a schematic block diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the following detailed description of the embodiments of the present application will be given with reference to the accompanying drawings.
It should be understood that the described embodiments are merely some, but not all, embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
The terminology used in the embodiments of the application is for the purpose of describing particular embodiments only and is not intended to be limiting of embodiments of the application. As used in this application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any or all possible combinations of one or more of the associated listed items.
When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary examples do not represent all implementations consistent with the application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the application as detailed in the accompanying claims. In the description of the present application, it should be understood that the terms "first," "second," "third," and the like are used merely to distinguish between similar objects and are not necessarily used to describe a particular order or sequence, nor should they be construed to indicate or imply relative importance. The specific meaning of the above terms in the present application can be understood by those of ordinary skill in the art according to the specific circumstances.
Furthermore, in the description of the present application, unless otherwise indicated, "a plurality" means two or more. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" covers the three cases of A alone, A and B together, and B alone. The character "/" generally indicates that the associated objects before and after it are in an "or" relationship.
The video de-interlacing processing method provided by the embodiment of the application can be applied to scenarios in which interlaced video is converted into progressive video. In the related art, the even lines of an odd field in interlaced video are interpolated by copying the data of the line above each even line into that line. Similarly, the odd lines of an even field are interpolated by copying the data of the line below each odd line into that line.
In the process of implementing the present invention, the inventor found that the line-duplication method in the related art causes image content distortion, blurring, jitter and similar problems in the interpolated interlaced video.
The application detects motion of pixel point to be interpolated of current field image, determines whether pixel point to be interpolated is static pixel point or motion pixel point, carries out interpolation process to static pixel point according to adjacent field data of current field image when pixel point to be interpolated is static pixel point, carries out interpolation process to motion pixel point according to adjacent field data of current field image and field data of current field image when pixel point to be interpolated is motion pixel point. By adopting different interpolation processing modes for the static pixel points and the moving pixel points, the interpolation precision of the pixel points to be interpolated in the current field image is improved, and distortion, blurring and jitter of the image content of the interpolated current field image are avoided.
Fig. 1 is a flow chart of a video de-interlacing processing method according to an embodiment of the application. The embodiment of the application provides a video de-interlacing processing method, which comprises the following steps:
s10, acquiring a current field image, a previous field image of the current field image and a next field image of the current field image, wherein the current field image is an image obtained by conducting interlacing sampling on a video image of a current frame, the previous field image of the current field image is an image obtained by conducting interlacing sampling on a video image of a previous frame of the current frame, and the next field image of the current field image is an image obtained by conducting interlacing sampling on a video image of a next frame of the current frame;
s20, if the pixel point to be interpolated of the current field image is a static pixel point, interpolating the pixel point to be interpolated according to the field data of the previous field image and the field data of the next field image, wherein the pixel point to be interpolated is a pixel point of a pixel value to be filled;
s30, if the pixel point to be interpolated of the current field image is a motion pixel point, interpolating the pixel point to be interpolated according to the field data of the previous field image, the field data of the next field image and the field data of the current field image;
And S40, obtaining the interpolated current field image.
The video de-interlacing processing method provided by the application comprises the steps of performing motion detection on a pixel to be interpolated of a current field image, determining whether the pixel to be interpolated is a static pixel or a motion pixel, performing interpolation processing on the static pixel according to adjacent field data of the current field image when the pixel to be interpolated is the static pixel, and performing interpolation processing on the motion pixel according to the adjacent field data of the current field image and the field data of the current field image when the pixel to be interpolated is the motion pixel. By adopting different interpolation processing modes for the static pixel points and the moving pixel points, the interpolation precision of the pixel points to be interpolated in the current field image is improved, and distortion, blurring and jitter of the image content of the interpolated current field image are avoided.
The following embodiments of the present application use a computer as an execution body to describe each step of a video de-interlacing processing method.
For step S10, a current field image, a previous field image of the current field image and a next field image of the current field image are obtained, wherein the current field image is an image obtained by conducting interlacing sampling on a video image of a current frame, the previous field image of the current field image is an image obtained by conducting interlacing sampling on a video image of a previous frame of the current frame, and the next field image of the current field image is an image obtained by conducting interlacing sampling on a video image of a next frame of the current frame.
The current field image, the previous field image of the current field image and the next field image of the current field image are all interlaced video images. Specifically, if the current field image is an odd field image, the previous field image of the current field image is an even field image, and the next field image of the current field image is an even field image. If the current field image is an even field image, the previous field image of the current field image is an odd field image, and the next field image of the current field image is an odd field image.
The interlaced video image can be an RGB format image or a YUV format image. If the interlaced video image is an RGB format image, the RGB format image may be converted into a YUV format image.
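Where an interlaced source arrives in RGB, the conversion to YUV mentioned above can be sketched as follows. The patent does not name a conversion matrix, so the BT.601 full-range coefficients used here, like the helper name `rgb_to_yuv`, are an illustrative assumption:

```python
import numpy as np

def rgb_to_yuv(rgb):
    """Convert an RGB image (H, W, 3) to YUV.

    The patent only states that an RGB input may be converted to YUV;
    the BT.601 full-range coefficients used here are an illustrative
    choice, not taken from the text.
    """
    m = np.array([[ 0.299,  0.587,  0.114],   # Y (luminance)
                  [-0.169, -0.331,  0.500],   # U (chrominance)
                  [ 0.500, -0.419, -0.081]])  # V (chrominance)
    yuv = rgb.astype(np.float64) @ m.T
    yuv[..., 1:] += 128.0  # shift chroma into the unsigned 0-255 range
    return np.clip(np.rint(yuv), 0, 255).astype(np.uint8)
```

On a neutral grey input the chroma channels land at 128, matching the mid-point convention commonly used for UV data.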
In the embodiment of the application, the current field image, the previous field image of the current field image and the next field image of the current field image can be directly acquired. Or firstly, acquiring continuous three-frame video images, and carrying out interlacing sampling processing on the continuous three-frame video images to respectively acquire a current field image, a previous field image of the current field image and a next field image of the current field image.
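The interlaced sampling of step S10 can be illustrated with a small sketch. The helper names and the zero-filling of the missing lines are assumptions made for illustration; the patent only states that each of the three consecutive frames is interlace-sampled into a field, and that adjacent fields have opposite parity:

```python
import numpy as np

def sample_field(frame, field_parity):
    """Keep only the lines of one parity; the blank lines are left as
    zeros to mark the pixels that still need interpolation.

    field_parity 0 keeps even-indexed rows, 1 keeps odd-indexed rows.
    """
    field = np.zeros_like(frame)
    field[field_parity::2] = frame[field_parity::2]
    return field

def fields_from_frames(prev_frame, cur_frame, next_frame, cur_parity):
    """Build the previous, current and subsequent field images from
    three consecutive frames (hypothetical helper)."""
    other = 1 - cur_parity  # adjacent fields have the opposite parity
    return (sample_field(prev_frame, other),
            sample_field(cur_frame, cur_parity),
            sample_field(next_frame, other))
```

Note that, because adjacent fields have opposite parity, every line that is blank in the current field image is present in both the previous and the subsequent field images, which is what makes the co-located first and second pixel points of the later steps available.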
In an alternative embodiment, after step S10, the method further comprises the steps of judging the motion state of the pixel to be interpolated of the current field image, and judging the motion state of the pixel to be interpolated of the current field image, wherein the steps comprise:
s101, acquiring a first pixel value of a first pixel corresponding to a pixel to be interpolated in a previous field image, a second pixel value of a second pixel corresponding to the pixel to be interpolated in a subsequent field image, a third pixel value of a third pixel in a current field image and a fourth pixel value of a fourth pixel in the current field image, wherein the third pixel is an adjacent pixel above the pixel to be interpolated in the current field image, and the fourth pixel is an adjacent pixel below the pixel to be interpolated in the current field image;
S102, calculating a first absolute value difference value and a second average value of the first pixel value and the second pixel value;
s103, calculating a second absolute value difference value and a third average value of the third pixel value and the fourth pixel value;
S104, calculating a third absolute value difference value between the second average value and the third average value;
S105, judging whether the first absolute value difference value, the second absolute value difference value and the third absolute value difference value are smaller than or equal to a first preset threshold value, if so, determining that the pixel point to be interpolated of the current field image is a static pixel point, and if not, determining that the pixel point to be interpolated of the current field image is a motion pixel point.
The position of the first pixel point in the previous field image is the same as the position of the pixel point to be interpolated in the current field image. For example, if the pixel point to be interpolated is in the ith row and jth column in the current field image, the first pixel point is also in the ith row and jth column in the previous field image.
The position of the second pixel point in the later field image is the same as the position of the pixel point to be interpolated in the current field image. For example, if the pixel point to be interpolated is in the ith row and jth column in the current field image, the second pixel point is also in the ith row and jth column in the subsequent field image.
The first pixel value of the first pixel point comprises YUV channel data of the first pixel point, the second pixel value of the second pixel point comprises YUV channel data of the second pixel point, the third pixel value of the third pixel point comprises YUV channel data of the third pixel point, and the fourth pixel value of the fourth pixel point comprises YUV channel data of the fourth pixel point. The YUV channel data includes luminance values (Y channel data) and chrominance values (UV channel data).
The first absolute difference is the absolute value of the difference between the first pixel value and the second pixel value. The second average value is the average of the first pixel value and the second pixel value. The second absolute difference is the absolute value of the difference between the third pixel value and the fourth pixel value. The third average value is the average of the third pixel value and the fourth pixel value. The third absolute difference is the absolute value of the difference between the second average value and the third average value.
The first preset threshold value can be set manually according to actual requirements. For example, the first preset threshold is 5.
In the embodiment of the application, if the first absolute value difference value, the second absolute value difference value and the third absolute value difference value are all smaller than or equal to the first preset threshold value, the pixel point to be interpolated is determined to be a static pixel point. And if the first absolute value difference value is larger than a first preset threshold value, or the second absolute value difference value is larger than the first preset threshold value, or the third absolute value difference value is larger than the first preset threshold value, determining the pixel point to be interpolated as a motion pixel point.
By comparing the pixel difference between the fields adjacent to the current field image, the pixel difference between the lines above and below within the current field image, and the difference between the average of the adjacent-field pixels and the average of the upper- and lower-line pixels, it is determined whether the pixel point to be interpolated in the current field image is in motion, which improves the accuracy and reliability of motion state detection.
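Steps S101 to S105 amount to three comparisons against one threshold. A minimal sketch, operating on single luminance samples and using the example threshold of 5 from the text (the function name and scalar inputs are illustrative assumptions):

```python
def is_static(a, b, c, d, threshold=5):
    """Classify a to-be-interpolated pixel as static or moving
    (steps S101 to S105).

    a, b: co-located pixel values in the previous and subsequent fields
    c, d: values of the neighbours directly above and below in the
          current field
    """
    diff1 = abs(a - b)        # first absolute difference
    avg2 = (a + b) / 2        # second average value
    diff2 = abs(c - d)        # second absolute difference
    avg3 = (c + d) / 2        # third average value
    diff3 = abs(avg2 - avg3)  # third absolute difference
    # Static only if all three differences are within the threshold.
    return diff1 <= threshold and diff2 <= threshold and diff3 <= threshold
```

A pixel whose neighbourhood barely changes across the three fields is classified as static; any one large difference is enough to mark it as moving.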
For step S20, if the pixel point to be interpolated of the current field image is a still pixel point, interpolating the pixel point to be interpolated according to the field data of the previous field image and the field data of the next field image, wherein the pixel point to be interpolated is a pixel point of a pixel value to be filled.
The field data of the previous field image refers to pixel values of each pixel point in the previous field image, and includes YUV channel data of each pixel point in the previous field image.
The field data of the next field image refers to pixel values of each pixel point in the next field image, and includes YUV channel data of each pixel point in the next field image.
The pixel point to be interpolated of the current field image refers to a pixel point to be filled with a pixel value in the current field image. Specifically, if the current field image is an odd field image, the pixel points in all even lines of the odd field image are pixel points to be interpolated. If the current field image is an even field image, the pixel points in all odd lines of the even field image are pixel points to be interpolated.
In the embodiment of the application, after the pixel point to be interpolated is determined to be a static pixel point, the pixel point to be interpolated is subjected to interpolation processing according to the field data of the previous field image and the field data of the next field image, so as to obtain the pixel value of the pixel point to be interpolated.
In an alternative embodiment, step S20 includes steps S201 to S202, which are specifically as follows:
s201, if the pixel point to be interpolated of the current field image is a static pixel point, acquiring a first pixel value of a first pixel point corresponding to the pixel point to be interpolated in the previous field image and a second pixel value of a second pixel point corresponding to the pixel point to be interpolated in the next field image;
s202, calculating a first average value of the first pixel value and the second pixel value, and taking the first average value as the pixel value of the pixel point to be interpolated.
The first average value is obtained by averaging the first pixel value and the second pixel value.
In the embodiment of the application, if the pixel point to be interpolated is a static pixel point, the average of the first pixel value of the first pixel point and the second pixel value of the second pixel point is used as the pixel value of the pixel point to be interpolated.
When the pixel point to be interpolated is a static pixel point, the position of the static pixel point in different field images is kept unchanged, so that the static pixel point can be interpolated according to the pixel value of the corresponding pixel point in the adjacent field image of the current field image, and the blurring of the image content of the current field image after interpolation is avoided.
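Steps S201 to S202 (and their per-channel form in steps S21 to S22) reduce to a per-channel average. A minimal sketch, with pixels represented as (Y, U, V) triples for illustration:

```python
def interpolate_static(prev_px, next_px):
    """Steps S201 to S202: the value of a static pixel point is the
    average of the co-located pixel points in the previous and
    subsequent field images.

    Averaging each of the Y, U and V channels separately also covers
    the per-channel formulation of steps S21 to S22.
    """
    return tuple((p + n) / 2 for p, n in zip(prev_px, next_px))
```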
In an alternative embodiment, the field data of the previous field image includes the Y channel data of the previous field image and the UV channel data of the previous field image, the field data of the next field image includes the Y channel data of the next field image and the UV channel data of the next field image, and the step S20 includes steps S21-S22, specifically as follows:
S21, if the pixel point to be interpolated of the current field image is a static pixel point, obtaining YUV channel data of the pixel point to be interpolated according to Y channel data of a previous field image, UV channel data of the previous field image, Y channel data of the next field image and UV channel data of the next field image;
And S22, taking the YUV channel data as the pixel value of the pixel point to be interpolated.
Wherein, Y channel data is luminance value, and UV channel data is chromaticity value.
The Y channel data of the previous field image refers to the luminance value of each pixel point in the previous field image, and the UV channel data of the previous field image refers to the chrominance value of each pixel point in the previous field image.
The Y channel data of the subsequent field image refers to the luminance value of each pixel point in the subsequent field image, and the UV channel data of the subsequent field image refers to the chrominance value of each pixel point in the subsequent field image.
In the embodiment of the present application, if the pixel point to be interpolated of the current field image is a static pixel point, step S21 may refer to steps S201 to S202 to obtain the Y channel data and UV channel data of the pixel point to be interpolated, which is not repeated here. The Y channel data and the UV channel data are then combined into YUV channel data, and the YUV channel data is taken as the pixel value of the pixel point to be interpolated.
If the pixel point to be interpolated of the current field image is a static pixel point, because the positions of the static pixel point in different field images are kept unchanged, the pixel point to be interpolated can be interpolated according to Y channel data and UV channel data of corresponding pixel points in adjacent field images of the current field image, and the Y channel data and the UV channel data of the pixel point to be interpolated are obtained, so that blurring of image contents of the interpolated current field image is avoided.
For step S30, if the pixel to be interpolated of the current field image is a motion pixel, the pixel to be interpolated is interpolated according to the field data of the previous field image, the field data of the next field image, and the field data of the current field image.
The field data of the current field image refers to pixel values of all pixels except the pixel to be interpolated in the current field image, and includes YUV channel data of all pixels except the pixel to be interpolated in the current field image.
In the embodiment of the application, after the pixel point to be interpolated is determined to be a motion pixel point, the pixel point to be interpolated is subjected to interpolation processing according to the field data of the previous field image, the field data of the current field image and the field data of the next field image, so as to obtain the pixel value of the pixel point to be interpolated.
In an alternative embodiment, step S30 includes steps S301 to S302, which are specifically as follows:
S301, if the pixel point to be interpolated of the current field image is a motion pixel point, acquiring a first pixel value of a first pixel point corresponding to the pixel point to be interpolated in the previous field image, a second pixel value of a second pixel point corresponding to the pixel point to be interpolated in the subsequent field image, a third pixel value of a third pixel point in the current field image, a fourth pixel value of a fourth pixel point in the current field image, a fifth pixel value of a fifth pixel point in the previous field image, a sixth pixel value of a sixth pixel point in the previous field image, a seventh pixel value of a seventh pixel point in the subsequent field image, and an eighth pixel value of an eighth pixel point in the subsequent field image; wherein the third pixel point is the adjacent pixel point located above the pixel point to be interpolated in the current field image, the fourth pixel point is the adjacent pixel point located below the pixel point to be interpolated in the current field image, the fifth pixel point is the adjacent pixel point above the pixel point corresponding to the third pixel point in the previous field image, the sixth pixel point is the adjacent pixel point below the pixel point corresponding to the fourth pixel point in the previous field image, the seventh pixel point is the adjacent pixel point above the pixel point corresponding to the third pixel point in the subsequent field image, and the eighth pixel point is the adjacent pixel point below the pixel point corresponding to the fourth pixel point in the subsequent field image;
S302, carrying out weighted summation on a first pixel value, a second pixel value, a third pixel value, a fourth pixel value, a fifth pixel value, a sixth pixel value, a seventh pixel value and an eighth pixel value to obtain a first weighted summation result, and taking the first weighted summation result as the pixel value of the pixel point to be interpolated.
The position of the pixel point corresponding to the third pixel point in the previous field image is the same as the position of the third pixel point in the current field image. For example, if the third pixel point is in the ith-1 th row and jth column in the current field image, the pixel point corresponding to the third pixel point in the previous field image is also in the ith-1 th row and jth column in the previous field image.
The position of the pixel point corresponding to the fourth pixel point in the previous field image is the same as the position of the fourth pixel point in the current field image. For example, if the fourth pixel point is in the (i+1) -th row and (j) -th column in the current field image, the pixel point corresponding to the fourth pixel point in the previous field image is also in the (i+1) -th row and (j) -th column in the previous field image.
The position of the pixel point corresponding to the third pixel point in the next field image is the same as the position of the third pixel point in the current field image. For example, if the third pixel point is in the ith-1 th row and jth column in the current field image, the pixel point corresponding to the third pixel point in the subsequent field image is also in the ith-1 th row and jth column in the subsequent field image.
The position of the pixel point corresponding to the fourth pixel point in the next field image is the same as the position of the fourth pixel point in the current field image. For example, if the fourth pixel point is in the (i+1) -th row and (j) -th column in the current field image, the pixel point corresponding to the fourth pixel point in the subsequent field image is also in the (i+1) -th row and (j) -th column in the subsequent field image.
The fifth pixel value of the fifth pixel point comprises YUV channel data of the fifth pixel point, the sixth pixel value of the sixth pixel point comprises YUV channel data of the sixth pixel point, the seventh pixel value of the seventh pixel point comprises YUV channel data of the seventh pixel point, and the eighth pixel value of the eighth pixel point comprises YUV channel data of the eighth pixel point.
In the embodiment of the application, a weight is set for each of the first, second, third, fourth, fifth, sixth, seventh and eighth pixel values, a weighted summation is performed according to the corresponding weights, and the first weighted summation result is used as the pixel value of the pixel point to be interpolated. Specifically, the weights corresponding to the first, second, third, fourth, fifth, sixth, seventh and eighth pixel values are 0.25, 0.25, 0.5, 0.5, -0.125, -0.125, -0.125 and -0.125, respectively, which sum to 1.
When the pixel point to be interpolated is a motion pixel point, its position changes between field images. Weighted interpolation is therefore performed using the pixel values of the corresponding pixel points in the adjacent field images of the current field image, the pixel values of the pixel points surrounding those corresponding pixel points, and the pixel values of the pixel points surrounding the pixel point to be interpolated in the current field image. This yields a pixel value for the pixel point to be interpolated that is not abrupt, avoiding blurring and jitter in the image content of the interpolated current field image.
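A minimal sketch of this weighted summation (steps S301 to S302), using the eight weights listed above; single-channel values are used for brevity and the function name is hypothetical:

```python
def interpolate_motion_pixel(p1, p2, p3, p4, p5, p6, p7, p8):
    """Weighted sum of the eight pixel values described in S301 to S302.

    p1, p2: co-located pixels in the previous / subsequent fields
    p3, p4: adjacent pixels above / below in the current field
    p5..p8: the outer neighbours in the previous / subsequent fields
    The weights sum to 1, so a flat region is reproduced exactly.
    """
    weights = (0.25, 0.25, 0.5, 0.5, -0.125, -0.125, -0.125, -0.125)
    return sum(w * p for w, p in zip(weights, (p1, p2, p3, p4, p5, p6, p7, p8)))

# In a flat grey region every sample is 128, and the result is 128 again:
print(interpolate_motion_pixel(*[128.0] * 8))  # 128.0
```

The negative outer weights act as a mild sharpening term, counteracting the softening that the vertical and temporal averaging would otherwise cause.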
In an alternative embodiment, the field data of the previous field image includes Y channel data of the previous field image and UV channel data of the previous field image, the field data of the next field image includes Y channel data of the next field image and UV channel data of the next field image, and step S30 includes steps S31 to S33, specifically as follows:
S31, if the pixel point to be interpolated of the current field image is a motion pixel point, obtaining Y-channel data of the pixel point to be interpolated according to Y-channel data of a previous field image, Y-channel data of a next field image and Y-channel data of the current field image;
s32, obtaining UV channel data of pixel points to be interpolated according to the UV channel data of the previous field of image and the UV channel data of the next field of image;
S33, obtaining YUV channel data of the pixel point to be interpolated according to the Y channel data and the UV channel data, and taking the YUV channel data as the pixel value of the pixel point to be interpolated.
The Y channel data of the current field image refers to the Y channel data of each pixel except the pixel to be interpolated in the current field image, and the UV channel data of the current field image refers to the UV channel data of each pixel except the pixel to be interpolated in the current field image.
In the embodiment of the present application, step S31 may refer to steps S301 to S302, and step S32 may refer to steps S201 to S202, which are not repeated here. The Y channel data of the pixel point to be interpolated and the UV channel data of the pixel point to be interpolated are then channel-spliced to obtain the YUV channel data of the pixel point to be interpolated, and the YUV channel data is taken as the pixel value of the pixel point to be interpolated.
If the pixel point to be interpolated of the current field image is a motion pixel point, weighted interpolation is performed using the Y channel data of the corresponding pixel points in the adjacent field images of the current field image, the Y channel data of the pixel points surrounding those corresponding pixel points, and the Y channel data of the pixel points surrounding the pixel point to be interpolated in the current field image, which determines the Y channel data of the pixel point to be interpolated. The UV channel data of the pixel point to be interpolated is determined by interpolating the UV channel data of the corresponding pixel points in the adjacent field images of the current field image. As a result, the pixel value of the pixel point to be interpolated is not abrupt, and blurring and jitter in the image content of the interpolated current field image are avoided.
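The per-channel split of steps S31 to S33 can be sketched as follows: the Y (luma) value comes from the eight-sample weighted sum, the U and V (chroma) values from the two-field average, and the three results are spliced into one YUV value. Names and sample ordering are illustrative assumptions:

```python
def interpolate_motion_yuv(y_samples, uv_prev, uv_next):
    """Per-channel sketch of steps S31 to S33 for a motion pixel.

    y_samples: the eight Y samples ordered as in step S301
    uv_prev, uv_next: (U, V) of the co-located pixels in the adjacent fields
    """
    weights = (0.25, 0.25, 0.5, 0.5, -0.125, -0.125, -0.125, -0.125)
    y = sum(w * s for w, s in zip(weights, y_samples))  # S31: weighted luma
    u = (uv_prev[0] + uv_next[0]) / 2                   # S32: averaged chroma
    v = (uv_prev[1] + uv_next[1]) / 2
    return (y, u, v)                                    # S33: splice into YUV

print(interpolate_motion_yuv([16.0] * 8, (128.0, 128.0), (130.0, 126.0)))
# (16.0, 129.0, 127.0)
```

Handling chroma with the simpler temporal average is consistent with the text above: only luma receives the full eight-sample weighting.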
For step S40, the interpolated current field image is obtained.
In the embodiment of the application, after interpolation processing is carried out on each pixel point to be interpolated in the current field image, each pixel point to be interpolated in the current field image has a pixel value, and the current field image is converted from an interlaced video image to a progressive video image.
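The complete pass over the current field therefore hinges on the still/motion decision for each pixel point to be interpolated. The following is a plausible sketch of that decision, mirroring the criterion recited in claim 4; the threshold value here is an arbitrary assumption:

```python
def classify_pixel(p1, p2, p3, p4, threshold):
    """Still/motion test sketch: the pixel is treated as still only when the
    temporal difference |p1 - p2|, the vertical difference |p3 - p4|, and the
    difference between the temporal and vertical averages are all within the
    first preset threshold.

    p1, p2: co-located pixels in the previous / subsequent fields
    p3, p4: adjacent pixels above / below in the current field
    """
    d1 = abs(p1 - p2)        # first absolute value difference
    avg2 = (p1 + p2) / 2     # second average value
    d2 = abs(p3 - p4)        # second absolute value difference
    avg3 = (p3 + p4) / 2     # third average value
    d3 = abs(avg2 - avg3)    # third absolute value difference
    return "still" if max(d1, d2, d3) <= threshold else "motion"

print(classify_pixel(100, 102, 101, 99, threshold=8))   # still
print(classify_pixel(100, 160, 101, 99, threshold=8))   # motion
```

Pixels classified as still are filled by the temporal average of S20, while motion pixels take the weighted spatio-temporal sum of S30.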
In an alternative embodiment, after interpolation, the method further includes filtering the pixel value of the pixel point to be interpolated, as follows:
S51, acquiring a pixel value of a pixel point to be interpolated, a pixel value of an adjacent pixel point above the pixel point to be interpolated and a pixel value of an adjacent pixel point below the pixel point to be interpolated;
s52, calculating a fourth absolute value difference value and a fourth average value of the pixel values of the adjacent pixel points above the pixel point to be interpolated and the pixel values of the adjacent pixel points below the pixel point to be interpolated;
S53, calculating a fifth absolute value difference value between the fourth average value and the pixel value of the pixel point to be interpolated;
And S54, if the fourth absolute value difference value is smaller than or equal to the second preset threshold value and the fifth absolute value difference value is smaller than or equal to the second preset threshold value, replacing the pixel value of the pixel point to be interpolated with a fourth average value.
The fourth absolute value difference is obtained by calculating the difference between the pixel value of the adjacent pixel point above the pixel point to be interpolated and the pixel value of the adjacent pixel point below it, and taking the absolute value of the difference. The fourth average value is the average of those two pixel values. The fifth absolute value difference is obtained by calculating the difference between the fourth average value and the pixel value of the pixel point to be interpolated, and taking the absolute value of the difference.
The second preset threshold value can be set manually according to actual requirements.
In the embodiment of the present application, if the fourth absolute value difference is smaller than or equal to the second preset threshold value and the fifth absolute value difference is smaller than or equal to the second preset threshold value, the pixel value of the pixel point to be interpolated is replaced with the fourth average value. If the fourth absolute value difference is greater than the second preset threshold, or the fifth absolute value difference is greater than the second preset threshold, the pixel value of the pixel to be interpolated does not need to be corrected.
Pixel points to be interpolated whose pixel values are abrupt after interpolation are screened out, and their pixel values are replaced with the fourth average value. This avoids color disorder in the image content and reduces distortion in the interpolated current field image.
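A sketch of this correction step (S51 to S54) for a single-channel pixel value; the threshold parameter stands for the second preset threshold, whose value below is an arbitrary assumption:

```python
def correct_interpolated_pixel(above, current, below, threshold):
    """Post-interpolation smoothing (steps S51 to S54): if the vertical
    neighbours agree with each other, and their average is close to the
    interpolated value, replace the interpolated value by that average."""
    d4 = abs(above - below)        # fourth absolute value difference
    avg4 = (above + below) / 2     # fourth average value
    d5 = abs(avg4 - current)       # fifth absolute value difference
    if d4 <= threshold and d5 <= threshold:
        return avg4                # S54: replace with the fourth average
    return current                 # otherwise leave the pixel unchanged

print(correct_interpolated_pixel(100, 140, 102, threshold=50))  # 101.0 (replaced)
print(correct_interpolated_pixel(100, 240, 102, threshold=50))  # 240 (kept)
```

The two-sided threshold keeps genuine edges intact: when the neighbours disagree strongly, or the interpolated value is far from their average, no replacement occurs.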
The following are examples of the apparatus of the application that may be used to perform the method of the application. For details not disclosed in the embodiments of the apparatus of the present application, please refer to the method in the embodiments of the present application.
Referring to fig. 2, a schematic structural diagram of a video de-interlacing processing device according to an embodiment of the present application is shown. The video de-interlacing processing device 6 provided by the embodiment of the application comprises:
a field image obtaining module 61, configured to obtain a current field image, a previous field image of the current field image, and a next field image of the current field image, where the current field image is an image obtained by performing interlace sampling on a video image of a current frame, the previous field image of the current field image is an image obtained by performing interlace sampling on a video image of a previous frame of the current frame, and the next field image of the current field image is an image obtained by performing interlace sampling on a video image of a next frame of the current frame;
The static pixel point interpolation module 62 is configured to interpolate a pixel point to be interpolated according to field data of a previous field image and field data of a next field image if the pixel point to be interpolated of the current field image is a static pixel point;
the motion pixel interpolation module 63 is configured to interpolate the pixel to be interpolated according to the field data of the previous field image, the field data of the next field image, and the field data of the current field image if the pixel to be interpolated of the current field image is a motion pixel;
the interpolated current field image obtaining module 64 is configured to obtain an interpolated current field image.
The application obtains a current field image, a previous field image of the current field image, and a subsequent field image of the current field image, where the current field image is an image obtained by interlaced sampling of the video image of the current frame, the previous field image is an image obtained by interlaced sampling of the video image of the frame before the current frame, and the subsequent field image is an image obtained by interlaced sampling of the video image of the frame after the current frame. If the pixel point to be interpolated of the current field image is a still pixel point, the pixel point to be interpolated is interpolated according to the field data of the previous field image and the field data of the subsequent field image, the pixel point to be interpolated being a pixel point whose pixel value is to be filled. If the pixel point to be interpolated is a motion pixel point, it is interpolated according to the field data of the previous field image, the field data of the subsequent field image, and the field data of the current field image, and the interpolated current field image is obtained. Motion detection is performed on each pixel point to be interpolated of the current field image to determine whether it is a still pixel point or a motion pixel point: a still pixel point is interpolated from the adjacent field data of the current field image, while a motion pixel point is interpolated from the adjacent field data together with the field data of the current field image.
By adopting different interpolation processing modes for the static pixel points and the moving pixel points, the interpolation precision of the pixel points to be interpolated in the current field image is improved, and distortion, blurring and jitter of the image content of the interpolated current field image are avoided.
The following are examples of the apparatus of the present application, which may be used to perform the method of the present application. For details not disclosed in the apparatus embodiments of the present application, please refer to the method in the embodiments of the present application.
Referring to fig. 3, the present application further provides an electronic device 300, which includes a parity field discriminator 301, a memory 302, and a data processor 303;
The parity field discriminator 301 is configured to receive a current field image, a previous field image of the current field image, and a next field image of the current field image, and send the current field image, the previous field image of the current field image, and the next field image of the current field image to the memory 302, where the current field image is an image obtained by performing interlace sampling on a video image of a current frame, the previous field image of the current field image is an image obtained by performing interlace sampling on a video image of a previous frame of the current frame, and the next field image of the current field image is an image obtained by performing interlace sampling on a video image of a next frame of the current frame;
The memory 302 is configured to send the current field image, a previous field image of the current field image, and a next field image of the current field image to the data processor 303;
The data processor 303 is configured to interpolate a pixel to be interpolated in the current field image according to the video de-interlacing processing method described above, so as to obtain an interpolated current field image.
The electronic device may be a computer, a mobile phone, a tablet computer, or a field-programmable gate array (FPGA) chip.
The parity field discriminator can determine whether the current field image, a previous field image of the current field image, and a next field image of the current field image are odd fields or even fields, and can identify resolutions of the current field image, the previous field image of the current field image, and the next field image of the current field image.
The memory 302 may include random access memory (RAM) or read-only memory (ROM). Optionally, the memory 302 includes a non-transitory computer-readable storage medium. The memory 302 may be used to store instructions, programs, code, code sets, or instruction sets. The memory 302 may include a stored program area and a stored data area: the stored program area may store instructions for implementing an operating system, instructions for at least one function (e.g., a touch function, a sound playing function, an image playing function, etc.), instructions for implementing the various method embodiments described above, etc., and the stored data area may store data related to the various method embodiments described above. The memory 302 may optionally be at least one storage device located remotely from the aforementioned data processor 303. As a computer storage medium, the memory 302 may include an operating system, a network communication module, a user interface module, and an operating application.
The data processor 303 may include one or more processing cores. The data processor 303 connects the various parts of the overall electronic device using various interfaces and lines, and performs the various functions of the electronic device and processes data by executing instructions, programs, code sets, or instruction sets stored in the memory 302 and invoking data stored in the memory 302. Optionally, the data processor 303 may be implemented in at least one hardware form among digital signal processing (DSP), field-programmable gate array (FPGA), and programmable logic array (PLA). The data processor 303 may integrate one or a combination of a central processing unit (CPU), a graphics processing unit (GPU), a modem, etc. The CPU mainly handles the operating system, user interface, application programs, and the like; the GPU is used for rendering and drawing the content to be displayed by the display layer; and the modem is used for handling wireless communication. It will be appreciated that the modem may alternatively not be integrated into the processor and may instead be implemented by a separate chip.
The data processor 303 may be configured to invoke an application program of the video de-interlacing processing method stored in the memory 302 and specifically execute the method steps of the foregoing embodiments; for the specific execution process, reference may be made to the description of the embodiments, which is not repeated here.
The present application also provides a computer-readable storage medium on which a computer program is stored; the program is adapted to be loaded by a processor and to execute the method steps of the above-described embodiments, and for the specific execution process, reference may be made to the description of the embodiments, which is not repeated here. The storage medium may be provided in an electronic device such as a personal computer, a notebook computer, a smart phone, or a tablet computer.
For the device embodiments, reference is made to the description of the method embodiments for the relevant points, since they essentially correspond to the method embodiments. The above-described apparatus embodiments are merely illustrative, in which components illustrated as separate components may or may not be physically separate, and components shown as units may or may not be physical units, may be located in one place, or may be distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purposes of the present application. Those of ordinary skill in the art will understand and implement the present application without undue burden.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks. These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart block or blocks and/or block diagram block or blocks.
In one typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, random access memory (RAM), and/or nonvolatile memory, etc., such as read-only memory (ROM) or flash RAM. Memory is an example of a computer-readable medium.
Computer-readable media include both permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic tape storage, magnetic disk storage, or any other non-transmission media that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media (transmission media), such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises that element.
The foregoing is merely exemplary of the present application and is not intended to limit the present application. Various modifications and variations of the present application will be apparent to those skilled in the art. Any modification, equivalent replacement, improvement, etc. which come within the spirit and principles of the application are to be included in the scope of the claims of the present application.

Claims (10)

1.一种视频去隔行处理方法,其特征在于,所述方法包括如下步骤:1. A video deinterlacing processing method, characterized in that the method comprises the following steps: 获取当前场图像、所述当前场图像的前一场图像以及所述当前场图像的后一场图像;其中,所述当前场图像为对当前帧的视频图像进行隔行采样得到的图像,所述当前场图像的前一场图像为对所述当前帧的前一帧视频图像进行隔行采样得到的图像,所述当前场图像的后一场图像为对所述当前帧的后一帧视频图像进行隔行采样得到的图像;Acquire a current field image, a field image before the current field image, and a field image after the current field image; wherein the current field image is an image obtained by performing interlaced sampling on a video image of a current frame, the field image before the current field image is an image obtained by performing interlaced sampling on a video image of a frame before the current frame, and the field image after the current field image is an image obtained by performing interlaced sampling on a video image of a frame after the current frame; 若所述当前场图像的待插值像素点为静止像素点,根据所述前一场图像的场数据以及所述后一场图像的场数据,对所述待插值像素点进行插值;其中,所述待插值像素点为待填充像素值的像素点;If the pixel point to be interpolated of the current field image is a stationary pixel point, interpolate the pixel point to be interpolated according to the field data of the previous field image and the field data of the next field image; wherein the pixel point to be interpolated is a pixel point to be filled with a pixel value; 若所述当前场图像的待插值像素点为运动像素点,根据所述前一场图像的场数据、所述后一场图像的场数据以及所述当前场图像的场数据,对所述待插值像素点进行插值;If the pixel to be interpolated of the current field image is a moving pixel, interpolating the pixel to be interpolated according to the field data of the previous field image, the field data of the next field image and the field data of the current field image; 获得插值后的所述当前场图像。The interpolated current field image is obtained. 2.根据权利要求1所述的视频去隔行处理方法,其特征在于:2. 
The video deinterlacing processing method according to claim 1, characterized in that: 所述若所述当前场图像的待插值像素点为静止像素点,根据所述前一场图像的场数据以及所述后一场图像的场数据,对所述待插值像素点进行插值的步骤,包括:If the pixel to be interpolated of the current field image is a stationary pixel, the step of interpolating the pixel to be interpolated according to the field data of the previous field image and the field data of the next field image comprises: 若所述当前场图像的待插值像素点为静止像素点,获取所述前一场图像中与所述待插值像素点对应的第一像素点的第一像素值以及所述后一场图像中与所述待插值像素点对应的第二像素点的第二像素值;If the pixel to be interpolated of the current field image is a stationary pixel, obtaining a first pixel value of a first pixel corresponding to the pixel to be interpolated in the previous field image and a second pixel value of a second pixel corresponding to the pixel to be interpolated in the next field image; 计算所述第一像素值与所述第二像素值的第一平均值,将所述第一平均值作为所述待插值像素点的像素值。A first average value of the first pixel value and the second pixel value is calculated, and the first average value is used as the pixel value of the pixel point to be interpolated. 3.根据权利要求1所述的视频去隔行处理方法,其特征在于:3. 
3. The video deinterlacing processing method according to claim 1, characterized in that the step of interpolating the pixel value of the pixel to be interpolated according to the field data of the previous field image, the field data of the next field image and the field data of the current field image, if the pixel to be interpolated of the current field image is a moving pixel, comprises:
if the pixel to be interpolated of the current field image is a moving pixel, obtaining a first pixel value of a first pixel corresponding to the pixel to be interpolated in the previous field image, a second pixel value of a second pixel corresponding to the pixel to be interpolated in the next field image, a third pixel value of a third pixel in the current field image, a fourth pixel value of a fourth pixel in the current field image, a fifth pixel value of a fifth pixel in the previous field image, a sixth pixel value of a sixth pixel in the previous field image, a seventh pixel value of a seventh pixel in the next field image, and an eighth pixel value of an eighth pixel in the next field image; wherein the third pixel is the adjacent pixel located directly above the pixel to be interpolated in the current field image, the fourth pixel is the adjacent pixel located directly below the pixel to be interpolated in the current field image, the fifth pixel is the adjacent pixel above the pixel corresponding to the third pixel in the previous field image, the sixth pixel is the adjacent pixel below the pixel corresponding to the fourth pixel in the previous field image, the seventh pixel is the adjacent pixel above the pixel corresponding to the third pixel in the next field image, and the eighth pixel is the adjacent pixel below the pixel corresponding to the fourth pixel in the next field image; and
performing a weighted summation on the first pixel value, the second pixel value, the third pixel value, the fourth pixel value, the fifth pixel value, the sixth pixel value, the seventh pixel value and the eighth pixel value to obtain a first weighted summation result, and using the first weighted summation result as the pixel value of the pixel to be interpolated.
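The weighted summation over the eight named pixel values might look like the following sketch (Python; the claim does not disclose the weight values, so the equal 1/8 weights, like all names here, are assumptions):

```python
def interpolate_moving(pixel_values, weights=None):
    """Weighted sum of the eight pixel values named in the claim:
    the temporal neighbours (first, second), the vertical neighbours in the
    current field (third, fourth), and the outer vertical neighbours in the
    previous and next fields (fifth to eighth)."""
    if weights is None:
        weights = [1.0 / 8.0] * 8  # equal weights: an assumption, not from the claim
    assert len(pixel_values) == 8 and abs(sum(weights) - 1.0) < 1e-9
    return sum(w * v for w, v in zip(weights, pixel_values))

# A flat neighbourhood reproduces its own value:
print(interpolate_moving([80] * 8))  # 80.0
```

Normalizing the weights to sum to 1 keeps the result in the same value range as the inputs, which matters for fixed-point hardware implementations.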
4. The video deinterlacing processing method according to any one of claims 1 to 3, characterized in that, after the step of acquiring the current field image, the previous field image of the current field image and the next field image of the current field image, the method further comprises: judging the motion state of the pixel to be interpolated of the current field image; the step of judging the motion state of the pixel to be interpolated of the current field image comprises:
acquiring a first pixel value of a first pixel corresponding to the pixel to be interpolated in the previous field image, a second pixel value of a second pixel corresponding to the pixel to be interpolated in the next field image, a third pixel value of a third pixel in the current field image, and a fourth pixel value of a fourth pixel in the current field image; wherein the third pixel is the adjacent pixel located directly above the pixel to be interpolated in the current field image, and the fourth pixel is the adjacent pixel located directly below the pixel to be interpolated in the current field image;
calculating a first absolute value difference and a second average value of the first pixel value and the second pixel value;
calculating a second absolute value difference and a third average value of the third pixel value and the fourth pixel value;
calculating a third absolute value difference between the second average value and the third average value; and
judging whether the first absolute value difference, the second absolute value difference and the third absolute value difference are all less than or equal to a first preset threshold; if so, determining that the pixel to be interpolated of the current field image is a stationary pixel; if not, determining that the pixel to be interpolated of the current field image is a moving pixel.
5. The video deinterlacing processing method according to any one of claims 1 to 3, characterized in that, after interpolating the pixel to be interpolated, the method further comprises: filtering the pixel value of the pixel to be interpolated; the step of filtering the pixel value of the pixel to be interpolated comprises:
obtaining the pixel value of the pixel to be interpolated, the pixel value of the adjacent pixel above the pixel to be interpolated, and the pixel value of the adjacent pixel below the pixel to be interpolated;
calculating a fourth absolute value difference and a fourth average value of the pixel value of the adjacent pixel above the pixel to be interpolated and the pixel value of the adjacent pixel below the pixel to be interpolated;
calculating a fifth absolute value difference between the fourth average value and the pixel value of the pixel to be interpolated; and
if the fourth absolute value difference is less than or equal to a second preset threshold, and the fifth absolute value difference is less than or equal to the second preset threshold, replacing the pixel value of the pixel to be interpolated with the fourth average value.
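The three-difference motion test of claim 4 can be sketched as follows (Python; the claim only names a "first preset threshold", so the default value of 10 and the function name are assumptions):

```python
def is_stationary(p1, p2, p3, p4, threshold=10):
    """Classify the pixel to be interpolated as stationary or moving.
    p1, p2: co-located pixels in the previous and next fields;
    p3, p4: the neighbours directly above and below in the current field."""
    d1 = abs(p1 - p2)                 # first absolute value difference (temporal)
    second_avg = (p1 + p2) / 2.0      # second average value
    d2 = abs(p3 - p4)                 # second absolute value difference (spatial)
    third_avg = (p3 + p4) / 2.0       # third average value
    d3 = abs(second_avg - third_avg)  # third absolute value difference
    return d1 <= threshold and d2 <= threshold and d3 <= threshold

print(is_stationary(100, 102, 101, 103))  # True  -> stationary pixel
print(is_stationary(100, 180, 101, 103))  # False -> moving pixel
```

The third difference compares the temporal and spatial averages, which catches pixels whose two fields happen to agree with each other but not with their spatial surroundings.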
6. The video deinterlacing processing method according to any one of claims 1 to 3, characterized in that the field data of the previous field image comprises Y channel data of the previous field image and UV channel data of the previous field image, and the field data of the next field image comprises Y channel data of the next field image and UV channel data of the next field image; and
the step of interpolating the pixel to be interpolated according to the field data of the previous field image and the field data of the next field image, if the pixel to be interpolated of the current field image is a stationary pixel, comprises:
if the pixel to be interpolated of the current field image is a stationary pixel, obtaining YUV channel data of the pixel to be interpolated according to the Y channel data of the previous field image, the UV channel data of the previous field image, the Y channel data of the next field image and the UV channel data of the next field image; and
using the YUV channel data as the pixel value of the pixel to be interpolated.
7. The video deinterlacing processing method according to any one of claims 1 to 3, characterized in that the field data of the previous field image comprises Y channel data of the previous field image and UV channel data of the previous field image, and the field data of the next field image comprises Y channel data of the next field image and UV channel data of the next field image; and
the step of interpolating the pixel to be interpolated according to the field data of the previous field image, the field data of the next field image and the field data of the current field image, if the pixel to be interpolated of the current field image is a moving pixel, comprises:
if the pixel to be interpolated of the current field image is a moving pixel, obtaining Y channel data of the pixel to be interpolated according to the Y channel data of the previous field image, the Y channel data of the next field image and the Y channel data of the current field image;
obtaining UV channel data of the pixel to be interpolated according to the UV channel data of the previous field image and the UV channel data of the next field image; and
obtaining YUV channel data of the pixel to be interpolated according to the Y channel data and the UV channel data, and using the YUV channel data as the pixel value of the pixel to be interpolated.
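Claims 6 and 7 split the interpolation by channel: for a moving pixel, only luma (Y) draws on the current field, while chroma (UV) always comes from the previous and next fields. A hedged sketch (Python; the blend weights for the moving-pixel Y channel are assumptions — the claims state only which fields contribute, not how much):

```python
def interpolate_yuv(y_prev, y_next, y_current, uv_prev, uv_next, moving):
    """Stationary pixel: Y and UV both averaged from the previous/next fields.
    Moving pixel: Y additionally uses current-field data (claim 7), while UV
    is still derived from the previous and next fields only.
    y_current stands in for the current field's Y contribution, e.g. a mean
    of the vertical neighbours (a hypothetical choice for this sketch)."""
    u = (uv_prev[0] + uv_next[0]) / 2.0
    v = (uv_prev[1] + uv_next[1]) / 2.0
    if moving:
        # 1/4 + 1/4 + 1/2 is an assumed weighting, not taken from the claims.
        y = 0.25 * y_prev + 0.25 * y_next + 0.5 * y_current
    else:
        y = (y_prev + y_next) / 2.0
    return (y, u, v)

print(interpolate_yuv(100, 110, 104, (128, 130), (128, 126), moving=False))
# (105.0, 128.0, 128.0)
print(interpolate_yuv(100, 110, 104, (128, 130), (128, 126), moving=True))
# (104.5, 128.0, 128.0)
```

Keeping UV purely temporal is consistent with chroma being subsampled in interlaced video formats, where spatial chroma neighbours are farther away than luma neighbours.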
8. A video deinterlacing processing device, characterized by comprising:
a field image acquisition module, configured to acquire a current field image, a previous field image of the current field image and a next field image of the current field image; wherein the current field image is an image obtained by interlaced sampling of the video image of the current frame, the previous field image is an image obtained by interlaced sampling of the video image of the frame preceding the current frame, and the next field image is an image obtained by interlaced sampling of the video image of the frame following the current frame;
a stationary pixel interpolation module, configured to interpolate the pixel to be interpolated of the current field image according to the field data of the previous field image and the field data of the next field image if the pixel to be interpolated of the current field image is a stationary pixel; wherein the pixel to be interpolated is a pixel whose pixel value is to be filled;
a moving pixel interpolation module, configured to interpolate the pixel to be interpolated of the current field image according to the field data of the previous field image, the field data of the next field image and the field data of the current field image if the pixel to be interpolated of the current field image is a moving pixel; and
an interpolated current field image obtaining module, configured to obtain the interpolated current field image.
9. An electronic device, characterized by comprising an odd-even field discriminator, a memory and a data processor;
the odd-even field discriminator is connected to the memory, and the memory is connected to the data processor;
the odd-even field discriminator is configured to receive a current field image, a previous field image of the current field image and a next field image of the current field image, and to send the current field image, the previous field image and the next field image to the memory; wherein the current field image is an image obtained by interlaced sampling of the video image of the current frame, the previous field image is an image obtained by interlaced sampling of the video image of the frame preceding the current frame, and the next field image is an image obtained by interlaced sampling of the video image of the frame following the current frame;
the memory is configured to send the current field image, the previous field image and the next field image to the data processor; and
the data processor is configured to interpolate the pixels to be interpolated in the current field image according to the video deinterlacing processing method of any one of claims 1 to 7, to obtain the interpolated current field image.
10. A computer-readable storage medium having a computer program stored thereon, characterized in that, when the computer program is executed by a data processor, the video deinterlacing processing method according to any one of claims 1 to 7 is implemented.
CN202411038041.XA 2024-07-31 2024-07-31 Video deinterlacing processing method, device, electronic device and storage medium Pending CN119182874A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202411038041.XA CN119182874A (en) 2024-07-31 2024-07-31 Video deinterlacing processing method, device, electronic device and storage medium

Publications (1)

Publication Number Publication Date
CN119182874A true CN119182874A (en) 2024-12-24

Family

ID=93900332

Country Status (1)

Country Link
CN (1) CN119182874A (en)

Similar Documents

Publication Publication Date Title
CN110377264B (en) Layer synthesis method, device, electronic equipment and storage medium
US20170039682A1 (en) Method and system of demosaicing bayer-type image data for image processing
US20090040246A1 (en) Image processing device, display device, image processing method, and program
US20070047651A1 (en) Video prediction apparatus and method for multi-format codec and video encoding/decoding apparatus and method using the video prediction apparatus and method
US8305489B2 (en) Video conversion apparatus and method, and program
JP2015510372A (en) System and method for image processing
JP5133038B2 (en) Image restoration method and image restoration apparatus
US9215353B2 (en) Image processing device, image processing method, image display device, and image display method
US20020001347A1 (en) Apparatus and method for converting to progressive scanning format
JP5192087B2 (en) Image processing apparatus and image processing method
US8412003B2 (en) Image processing methods, apparatus and computer program products using interdependent pixel interpolation operations
US20100039517A1 (en) Film cadence detection
JPWO2017203941A1 (en) IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND PROGRAM
CN119182874A (en) Video deinterlacing processing method, device, electronic device and storage medium
JP2010011108A (en) Image processing apparatus and method, and program
US8013935B2 (en) Picture processing circuit and picture processing method
US20100260435A1 (en) Edge Directed Image Processing
CN107071326B (en) Video processing method and device
US10015513B2 (en) Image processing apparatus and image processing method thereof
JP4250807B2 (en) Field frequency conversion device and conversion method
WO2014034170A1 (en) Image processing apparatus and image processing method
JP5018198B2 (en) Interpolation signal generation circuit, interpolation signal generation method, program, and video signal processing apparatus
JP2014033357A (en) Video signal processor and video signal processing method
JP5762006B2 (en) Image processing apparatus and image processing method
JP2007074588A (en) Image processor and processing method, program and recording medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination