Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the following detailed description of the embodiments of the present application will be given with reference to the accompanying drawings.
It should be understood that the described embodiments are merely some, but not all, embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
The terminology used in the embodiments of the application is for the purpose of describing particular embodiments only and is not intended to be limiting of embodiments of the application. As used in this application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any or all possible combinations of one or more of the associated listed items.
When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary examples do not represent all implementations consistent with the application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the application as detailed in the accompanying claims. In the description of the present application, it should be understood that the terms "first," "second," "third," and the like are used merely to distinguish between similar objects and are not necessarily used to describe a particular order or sequence, nor should they be construed to indicate or imply relative importance. The specific meaning of the above terms in the present application can be understood by those of ordinary skill in the art according to the specific circumstances.
Furthermore, in the description of the present application, unless otherwise indicated, "a plurality" means two or more. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may indicate three cases: A alone, both A and B, and B alone. The character "/" generally indicates that the associated objects are in an "or" relationship.
The video de-interlacing processing method provided by the embodiments of the application can be applied to scenes in which interlaced video is converted into progressive video. In the related art, for an odd field in interlaced video, each even line is interpolated by copying the data of the line above it into the even line; for an even field, each odd line is interpolated by copying the data of the line below it into the odd line.
In the process of realizing the invention, the inventor found that this line-copying approach in the related art can cause problems such as distortion, blurring and jitter in the image content of the interlaced video after interpolation.
The application performs motion detection on each pixel point to be interpolated in the current field image to determine whether it is a static pixel point or a motion pixel point. When the pixel point to be interpolated is a static pixel point, it is interpolated according to the field data of the fields adjacent to the current field image; when it is a motion pixel point, it is interpolated according to both the adjacent field data and the field data of the current field image itself. By adopting different interpolation modes for static pixel points and motion pixel points, the interpolation precision of the pixel points to be interpolated in the current field image is improved, and distortion, blurring and jitter of the image content of the interpolated current field image are avoided.
Fig. 1 is a flow chart of a video de-interlacing processing method according to an embodiment of the application. The embodiment of the application provides a video de-interlacing processing method, which comprises the following steps:
S10, acquiring a current field image, a previous field image of the current field image and a next field image of the current field image, wherein the current field image is an image obtained by performing interlaced sampling on the video image of the current frame, the previous field image is an image obtained by performing interlaced sampling on the video image of the frame preceding the current frame, and the next field image is an image obtained by performing interlaced sampling on the video image of the frame following the current frame;
S20, if the pixel point to be interpolated of the current field image is a static pixel point, interpolating the pixel point to be interpolated according to the field data of the previous field image and the field data of the next field image, wherein the pixel point to be interpolated is a pixel point whose pixel value is to be filled;
S30, if the pixel point to be interpolated of the current field image is a motion pixel point, interpolating the pixel point to be interpolated according to the field data of the previous field image, the field data of the next field image and the field data of the current field image;
S40, obtaining the interpolated current field image.
The video de-interlacing processing method provided by the application performs motion detection on the pixel point to be interpolated of the current field image to determine whether it is a static pixel point or a motion pixel point; when it is a static pixel point, interpolation is performed according to the adjacent field data of the current field image, and when it is a motion pixel point, interpolation is performed according to both the adjacent field data and the field data of the current field image. By adopting different interpolation modes for static pixel points and motion pixel points, the interpolation precision of the pixel points to be interpolated in the current field image is improved, and distortion, blurring and jitter of the image content of the interpolated current field image are avoided.
The following embodiments of the present application take a computer as the execution subject and describe each step of the video de-interlacing processing method.
For step S10, a current field image, a previous field image of the current field image and a next field image of the current field image are acquired, wherein the current field image is an image obtained by performing interlaced sampling on the video image of the current frame, the previous field image is an image obtained by performing interlaced sampling on the video image of the frame preceding the current frame, and the next field image is an image obtained by performing interlaced sampling on the video image of the frame following the current frame.
The current field image, the previous field image of the current field image and the next field image of the current field image are all interlaced video images. Specifically, if the current field image is an odd field image, the previous field image of the current field image is an even field image, and the next field image of the current field image is an even field image. If the current field image is an even field image, the previous field image of the current field image is an odd field image, and the next field image of the current field image is an odd field image.
The interlaced video image can be an RGB-format image or a YUV-format image. If the interlaced video image is in RGB format, it may first be converted into YUV format.
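Since the subsequent steps operate on YUV channel data, such a conversion step may be needed. The following is a minimal sketch of one common RGB-to-YUV conversion (the BT.601 analog matrix); the embodiments do not specify which conversion matrix is used, so the coefficients and the function name here are assumptions for illustration:

```python
import numpy as np

def rgb_to_yuv(rgb: np.ndarray) -> np.ndarray:
    """Convert an H x W x 3 RGB image to YUV (BT.601 analog matrix, assumed)."""
    rgb = rgb.astype(np.float32)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b          # luminance (Y channel)
    u = -0.14713 * r - 0.28886 * g + 0.436 * b     # chrominance (U channel)
    v = 0.615 * r - 0.51499 * g - 0.10001 * b      # chrominance (V channel)
    return np.stack([y, u, v], axis=-1)
```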
In the embodiment of the application, the current field image, the previous field image of the current field image and the next field image of the current field image can be acquired directly. Alternatively, three consecutive frames of video images may first be acquired, and interlaced sampling may be performed on them to obtain the current field image, the previous field image of the current field image and the next field image of the current field image, respectively.
In an alternative embodiment, after step S10, the method further comprises a step of judging the motion state of the pixel point to be interpolated of the current field image, which comprises:
S101, acquiring a first pixel value of a first pixel point corresponding to the pixel point to be interpolated in the previous field image, a second pixel value of a second pixel point corresponding to the pixel point to be interpolated in the next field image, a third pixel value of a third pixel point in the current field image and a fourth pixel value of a fourth pixel point in the current field image, wherein the third pixel point is the adjacent pixel point above the pixel point to be interpolated in the current field image, and the fourth pixel point is the adjacent pixel point below the pixel point to be interpolated in the current field image;
S102, calculating a first absolute difference and a second average value of the first pixel value and the second pixel value;
S103, calculating a second absolute difference and a third average value of the third pixel value and the fourth pixel value;
S104, calculating a third absolute difference between the second average value and the third average value;
S105, judging whether the first absolute difference, the second absolute difference and the third absolute difference are all smaller than or equal to a first preset threshold value; if so, determining that the pixel point to be interpolated of the current field image is a static pixel point, and if not, determining that the pixel point to be interpolated of the current field image is a motion pixel point.
The position of the first pixel point in the previous field image is the same as the position of the pixel point to be interpolated in the current field image. For example, if the pixel point to be interpolated is in the i-th row and j-th column of the current field image, the first pixel point is also in the i-th row and j-th column of the previous field image.
The position of the second pixel point in the next field image is the same as the position of the pixel point to be interpolated in the current field image. For example, if the pixel point to be interpolated is in the i-th row and j-th column of the current field image, the second pixel point is also in the i-th row and j-th column of the next field image.
The first pixel value of the first pixel point comprises YUV channel data of the first pixel point, the second pixel value of the second pixel point comprises YUV channel data of the second pixel point, the third pixel value of the third pixel point comprises YUV channel data of the third pixel point, and the fourth pixel value of the fourth pixel point comprises YUV channel data of the fourth pixel point. The YUV channel data includes luminance values (Y channel data) and chrominance values (UV channel data).
The first absolute difference is obtained by taking the absolute value of the difference between the first pixel value and the second pixel value. The second average value is obtained by averaging the first pixel value and the second pixel value. The second absolute difference is obtained by taking the absolute value of the difference between the third pixel value and the fourth pixel value. The third average value is obtained by averaging the third pixel value and the fourth pixel value. The third absolute difference is obtained by taking the absolute value of the difference between the second average value and the third average value.
The first preset threshold value can be set manually according to actual requirements. For example, the first preset threshold is 5.
In the embodiment of the application, if the first absolute difference, the second absolute difference and the third absolute difference are all smaller than or equal to the first preset threshold value, the pixel point to be interpolated is determined to be a static pixel point. If the first absolute difference, the second absolute difference or the third absolute difference is larger than the first preset threshold value, the pixel point to be interpolated is determined to be a motion pixel point.
Whether the pixel point to be interpolated in the current field image is in a motion state is thus determined by comparing the pixel difference between the fields adjacent to the current field image, the pixel difference between the lines above and below within the current field image, and the difference between the average of the adjacent-field pixels and the average of the upper and lower lines in the current field image, thereby improving the accuracy and reliability of motion-state detection.
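As an illustration of steps S101 to S105, the following is a minimal per-pixel sketch in Python. The function name and the NumPy array layout are assumptions for the example, and the text compares full YUV pixel values while this sketch operates on one channel at a time:

```python
import numpy as np

def is_static_pixel(prev_field: np.ndarray, next_field: np.ndarray,
                    cur_field: np.ndarray, i: int, j: int,
                    threshold: float = 5.0) -> bool:
    """Steps S101-S105 for the pixel to be interpolated at row i, column j.

    The three field arguments are 2-D single-channel arrays (e.g. the Y
    channel); `threshold` is the first preset threshold (5 in the text's
    example). Row i is assumed interior, so i-1 and i+1 are valid.
    """
    p1 = float(prev_field[i, j])      # first pixel value (previous field)
    p2 = float(next_field[i, j])      # second pixel value (next field)
    p3 = float(cur_field[i - 1, j])   # third pixel value (line above)
    p4 = float(cur_field[i + 1, j])   # fourth pixel value (line below)

    d1 = abs(p1 - p2)                 # first absolute difference
    avg2 = (p1 + p2) / 2.0            # second average value
    d2 = abs(p3 - p4)                 # second absolute difference
    avg3 = (p3 + p4) / 2.0            # third average value
    d3 = abs(avg2 - avg3)             # third absolute difference

    # Static only if all three differences stay within the threshold.
    return d1 <= threshold and d2 <= threshold and d3 <= threshold
```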
For step S20, if the pixel point to be interpolated of the current field image is a static pixel point, the pixel point to be interpolated is interpolated according to the field data of the previous field image and the field data of the next field image, wherein the pixel point to be interpolated is a pixel point whose pixel value is to be filled.
The field data of the previous field image refers to pixel values of each pixel point in the previous field image, and includes YUV channel data of each pixel point in the previous field image.
The field data of the next field image refers to pixel values of each pixel point in the next field image, and includes YUV channel data of each pixel point in the next field image.
The pixel point to be interpolated of the current field image refers to a pixel point to be filled with a pixel value in the current field image. Specifically, if the current field image is an odd field image, the pixel points in all even lines of the odd field image are pixel points to be interpolated. If the current field image is an even field image, the pixel points in all odd lines of the even field image are pixel points to be interpolated.
In the embodiment of the application, after the pixel point to be interpolated is determined to be a static pixel point, the pixel point to be interpolated is subjected to interpolation processing according to the field data of the previous field image and the field data of the next field image, so as to obtain the pixel value of the pixel point to be interpolated.
In an alternative embodiment, step S20 includes steps S201 to S202, which are specifically as follows:
S201, if the pixel point to be interpolated of the current field image is a static pixel point, acquiring a first pixel value of a first pixel point corresponding to the pixel point to be interpolated in the previous field image and a second pixel value of a second pixel point corresponding to the pixel point to be interpolated in the next field image;
S202, calculating a first average value of the first pixel value and the second pixel value, and taking the first average value as the pixel value of the pixel point to be interpolated.
The first average value is obtained by averaging the first pixel value and the second pixel value.
In the embodiment of the application, if the pixel point to be interpolated is a static pixel point, the average of the first pixel value of the first pixel point and the second pixel value of the second pixel point is used as the pixel value of the pixel point to be interpolated.
When the pixel point to be interpolated is a static pixel point, its position remains unchanged across different field images, so the static pixel point can be interpolated according to the pixel values of the corresponding pixel points in the fields adjacent to the current field image, which avoids blurring of the image content of the interpolated current field image.
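A minimal sketch of steps S201 to S202, under the same assumed array layout as the motion-detection sketch above (the function name is hypothetical):

```python
import numpy as np

def interpolate_static(prev_field: np.ndarray, next_field: np.ndarray,
                       i: int, j: int) -> float:
    """Steps S201-S202: a static pixel takes the first average value,
    i.e. the mean of the co-located pixels in the adjacent fields."""
    return (float(prev_field[i, j]) + float(next_field[i, j])) / 2.0
```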
In an alternative embodiment, the field data of the previous field image includes the Y channel data of the previous field image and the UV channel data of the previous field image, the field data of the next field image includes the Y channel data of the next field image and the UV channel data of the next field image, and the step S20 includes steps S21-S22, specifically as follows:
S21, if the pixel point to be interpolated of the current field image is a static pixel point, obtaining YUV channel data of the pixel point to be interpolated according to Y channel data of a previous field image, UV channel data of the previous field image, Y channel data of the next field image and UV channel data of the next field image;
S22, taking the YUV channel data as the pixel value of the pixel point to be interpolated.
Wherein, Y channel data is luminance value, and UV channel data is chromaticity value.
The Y channel data of the previous field image refers to the luminance value of each pixel point in the previous field image, and the UV channel data of the previous field image refers to the chrominance value of each pixel point in the previous field image.
The Y channel data of the next field image refers to the luminance value of each pixel point in the next field image, and the UV channel data of the next field image refers to the chrominance value of each pixel point in the next field image.
In the embodiment of the present application, if the pixel point to be interpolated of the current field image is a static pixel point, step S21 may refer to steps S201 to S202 to obtain the Y channel data and the UV channel data of the pixel point to be interpolated, which is not repeated here. The Y channel data and the UV channel data are then spliced into YUV channel data, and the YUV channel data is taken as the pixel value of the pixel point to be interpolated.
If the pixel point to be interpolated of the current field image is a static pixel point, its position remains unchanged across different field images, so the pixel point to be interpolated can be interpolated according to the Y channel data and UV channel data of the corresponding pixel points in the fields adjacent to the current field image to obtain its own Y channel data and UV channel data, which avoids blurring of the image content of the interpolated current field image.
For step S30, if the pixel to be interpolated of the current field image is a motion pixel, the pixel to be interpolated is interpolated according to the field data of the previous field image, the field data of the next field image, and the field data of the current field image.
The field data of the current field image refers to pixel values of all pixels except the pixel to be interpolated in the current field image, and includes YUV channel data of all pixels except the pixel to be interpolated in the current field image.
In the embodiment of the application, after the pixel point to be interpolated is determined to be a motion pixel point, the pixel point to be interpolated is subjected to interpolation processing according to the field data of the previous field image, the field data of the current field image and the field data of the next field image, so as to obtain the pixel value of the pixel point to be interpolated.
In an alternative embodiment, step S30 includes steps S301 to S302, which are specifically as follows:
S301, if the pixel point to be interpolated of the current field image is a motion pixel point, acquiring a first pixel value of a first pixel point corresponding to the pixel point to be interpolated in the previous field image, a second pixel value of a second pixel point corresponding to the pixel point to be interpolated in the next field image, a third pixel value of a third pixel point in the current field image, a fourth pixel value of a fourth pixel point in the current field image, a fifth pixel value of a fifth pixel point in the previous field image, a sixth pixel value of a sixth pixel point in the previous field image, a seventh pixel value of a seventh pixel point in the next field image and an eighth pixel value of an eighth pixel point in the next field image, wherein the third pixel point is the adjacent pixel point above the pixel point to be interpolated in the current field image, the fourth pixel point is the adjacent pixel point below the pixel point to be interpolated in the current field image, the fifth pixel point is the pixel point corresponding to the third pixel point in the previous field image, the sixth pixel point is the pixel point corresponding to the fourth pixel point in the previous field image, the seventh pixel point is the pixel point corresponding to the third pixel point in the next field image, and the eighth pixel point is the pixel point corresponding to the fourth pixel point in the next field image;
S302, carrying out weighted summation on a first pixel value, a second pixel value, a third pixel value, a fourth pixel value, a fifth pixel value, a sixth pixel value, a seventh pixel value and an eighth pixel value to obtain a first weighted summation result, and taking the first weighted summation result as the pixel value of the pixel point to be interpolated.
The position of the pixel point corresponding to the third pixel point in the previous field image is the same as the position of the third pixel point in the current field image. For example, if the third pixel point is in the (i-1)-th row and j-th column of the current field image, the pixel point corresponding to it in the previous field image is also in the (i-1)-th row and j-th column of the previous field image.
The position of the pixel point corresponding to the fourth pixel point in the previous field image is the same as the position of the fourth pixel point in the current field image. For example, if the fourth pixel point is in the (i+1)-th row and j-th column of the current field image, the pixel point corresponding to it in the previous field image is also in the (i+1)-th row and j-th column of the previous field image.
The position of the pixel point corresponding to the third pixel point in the next field image is the same as the position of the third pixel point in the current field image. For example, if the third pixel point is in the (i-1)-th row and j-th column of the current field image, the pixel point corresponding to it in the next field image is also in the (i-1)-th row and j-th column of the next field image.
The position of the pixel point corresponding to the fourth pixel point in the next field image is the same as the position of the fourth pixel point in the current field image. For example, if the fourth pixel point is in the (i+1)-th row and j-th column of the current field image, the pixel point corresponding to it in the next field image is also in the (i+1)-th row and j-th column of the next field image.
The fifth pixel value of the fifth pixel point comprises YUV channel data of the fifth pixel point, the sixth pixel value of the sixth pixel point comprises YUV channel data of the sixth pixel point, the seventh pixel value of the seventh pixel point comprises YUV channel data of the seventh pixel point, and the eighth pixel value of the eighth pixel point comprises YUV channel data of the eighth pixel point.
In the embodiment of the application, a weight is set for each of the first pixel value, the second pixel value, the third pixel value, the fourth pixel value, the fifth pixel value, the sixth pixel value, the seventh pixel value and the eighth pixel value, a weighted summation is performed according to the corresponding weights, and the first weighted summation result is used as the pixel value of the pixel point to be interpolated. Specifically, the weights corresponding to the first through eighth pixel values are 0.25, 0.25, 0.5, 0.5, -0.125, -0.125, -0.125 and -0.125, respectively.
When the pixel point to be interpolated is a motion pixel point, its position changes between different field images. Weighted interpolation is therefore performed according to the pixel values of the corresponding pixel points in the fields adjacent to the current field image, the pixel values of the pixel points surrounding those corresponding pixel points, and the pixel values of the pixel points surrounding the pixel point to be interpolated in the current field image. This determines the pixel value of the pixel point to be interpolated without abrupt changes, and avoids blurring and jitter of the image content of the interpolated current field image.
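A minimal sketch of steps S301 to S302. The original text lists only six weight values for the eight samples; the reading below assumes the weight -0.125 applies to each of the four neighbor samples in the adjacent fields, which makes the weights sum to 1 (the function name and array layout are again assumptions):

```python
import numpy as np

def interpolate_motion(prev_field: np.ndarray, next_field: np.ndarray,
                       cur_field: np.ndarray, i: int, j: int) -> float:
    """Steps S301-S302: weighted summation over eight sample points."""
    samples = [
        prev_field[i, j],       # first pixel value:   co-located, previous field
        next_field[i, j],       # second pixel value:  co-located, next field
        cur_field[i - 1, j],    # third pixel value:   line above, current field
        cur_field[i + 1, j],    # fourth pixel value:  line below, current field
        prev_field[i - 1, j],   # fifth pixel value:   line above, previous field
        prev_field[i + 1, j],   # sixth pixel value:   line below, previous field
        next_field[i - 1, j],   # seventh pixel value: line above, next field
        next_field[i + 1, j],   # eighth pixel value:  line below, next field
    ]
    # Weights as read from the text; the four -0.125 entries are assumed.
    weights = [0.25, 0.25, 0.5, 0.5, -0.125, -0.125, -0.125, -0.125]
    return float(sum(w * float(s) for w, s in zip(weights, samples)))
```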
In an alternative embodiment, the field data of the previous field image includes Y channel data of the previous field image and UV channel data of the previous field image, the field data of the next field image includes Y channel data of the next field image and UV channel data of the next field image, and step S30 includes steps S31 to S33, specifically as follows:
S31, if the pixel point to be interpolated of the current field image is a motion pixel point, obtaining Y-channel data of the pixel point to be interpolated according to Y-channel data of a previous field image, Y-channel data of a next field image and Y-channel data of the current field image;
S32, obtaining the UV channel data of the pixel point to be interpolated according to the UV channel data of the previous field image and the UV channel data of the next field image;
S33, obtaining YUV channel data of the pixel point to be interpolated according to the Y channel data and the UV channel data, and taking the YUV channel data as the pixel value of the pixel point to be interpolated.
The Y channel data of the current field image refers to the Y channel data of each pixel except the pixel to be interpolated in the current field image, and the UV channel data of the current field image refers to the UV channel data of each pixel except the pixel to be interpolated in the current field image.
In the embodiment of the present application, step S31 may refer to steps S301 to S302, and step S32 may refer to steps S201 to S202, which are not repeated here. The Y channel data of the pixel point to be interpolated and the UV channel data of the pixel point to be interpolated are channel-spliced to obtain the YUV channel data of the pixel point to be interpolated, and the YUV channel data is taken as the pixel value of the pixel point to be interpolated.
If the pixel point to be interpolated of the current field image is a motion pixel point, its Y channel data is determined by weighted interpolation over the Y channel data of the corresponding pixel points in the fields adjacent to the current field image, the Y channel data of the pixel points surrounding those corresponding pixel points, and the Y channel data of the pixel points surrounding the pixel point to be interpolated in the current field image; its UV channel data is determined by interpolation over the UV channel data of the corresponding pixel points in the fields adjacent to the current field image. In this way the pixel value of the pixel point to be interpolated has no abrupt change, and blurring and jitter of the image content of the interpolated current field image are avoided.
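A sketch of steps S31 to S33 that reuses the `interpolate_motion` and `interpolate_static` helpers defined in the sketches above; the channel ordering (Y, U, V along the last axis) is an assumption:

```python
import numpy as np

def interpolate_motion_yuv(prev_yuv: np.ndarray, next_yuv: np.ndarray,
                           cur_yuv: np.ndarray, i: int, j: int) -> tuple:
    """Steps S31-S33 for a motion pixel: Y from the three-field weighted
    sum, UV from the two-field average, spliced into one YUV sample."""
    y = interpolate_motion(prev_yuv[..., 0], next_yuv[..., 0],
                           cur_yuv[..., 0], i, j)                     # S31
    u = interpolate_static(prev_yuv[..., 1], next_yuv[..., 1], i, j)  # S32
    v = interpolate_static(prev_yuv[..., 2], next_yuv[..., 2], i, j)  # S32
    return (y, u, v)                                                  # S33
```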
For step S40, the interpolated current field image is obtained.
In the embodiment of the application, after interpolation processing is carried out on each pixel point to be interpolated in the current field image, each pixel point to be interpolated in the current field image has a pixel value, and the current field image is converted from an interlaced video image to a progressive video image.
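Putting steps S10 to S40 together, the following end-to-end sketch fills every pixel to be interpolated in one field. It reuses the `is_static_pixel` and interpolation helpers sketched above; the `missing_rows` parameter, the border handling and the use of the Y channel alone for motion detection are assumptions made for the example:

```python
import numpy as np

def deinterlace_field(prev_yuv: np.ndarray, cur_yuv: np.ndarray,
                      next_yuv: np.ndarray, missing_rows,
                      threshold: float = 5.0) -> np.ndarray:
    """End-to-end sketch of steps S10-S40 for one field.

    The three H x W x 3 YUV arrays hold the previous, current and next
    field data; `missing_rows` lists the rows whose pixels are to be
    interpolated (even rows of an odd field, odd rows of an even field).
    """
    out = cur_yuv.astype(np.float32).copy()
    h, w, _ = out.shape
    for i in missing_rows:
        if i <= 0 or i >= h - 1:
            continue  # border rows are skipped in this sketch
        for j in range(w):
            # Motion detection on the Y channel (steps S101-S105).
            if is_static_pixel(prev_yuv[..., 0], next_yuv[..., 0],
                               cur_yuv[..., 0], i, j, threshold):
                # Static pixel: average of the adjacent fields (S21-S22).
                for c in range(3):
                    out[i, j, c] = interpolate_static(
                        prev_yuv[..., c], next_yuv[..., c], i, j)
            else:
                # Motion pixel: weighted Y, averaged UV (S31-S33).
                out[i, j] = interpolate_motion_yuv(
                    prev_yuv, next_yuv, cur_yuv, i, j)
    return out
```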
In an alternative embodiment, after interpolation, the method further comprises a step of filtering the pixel value of the pixel point to be interpolated, which comprises:
S51, acquiring the pixel value of the pixel point to be interpolated, the pixel value of the adjacent pixel point above the pixel point to be interpolated and the pixel value of the adjacent pixel point below the pixel point to be interpolated;
S52, calculating a fourth absolute difference and a fourth average value of the pixel value of the adjacent pixel point above the pixel point to be interpolated and the pixel value of the adjacent pixel point below the pixel point to be interpolated;
S53, calculating a fifth absolute difference between the fourth average value and the pixel value of the pixel point to be interpolated;
S54, if the fourth absolute difference is smaller than or equal to a second preset threshold value and the fifth absolute difference is smaller than or equal to the second preset threshold value, replacing the pixel value of the pixel point to be interpolated with the fourth average value.
The fourth absolute difference is obtained by taking the absolute value of the difference between the pixel value of the adjacent pixel point above the pixel point to be interpolated and the pixel value of the adjacent pixel point below it. The fourth average value is the average of these two pixel values. The fifth absolute difference is obtained by taking the absolute value of the difference between the fourth average value and the pixel value of the pixel point to be interpolated.
The second preset threshold value can be set manually according to actual requirements.
In the embodiment of the present application, if the fourth absolute difference is smaller than or equal to the second preset threshold value and the fifth absolute difference is smaller than or equal to the second preset threshold value, the pixel value of the pixel point to be interpolated is replaced with the fourth average value. If the fourth absolute difference is greater than the second preset threshold value, or the fifth absolute difference is greater than the second preset threshold value, the pixel value of the pixel point to be interpolated does not need to be corrected.
Pixel points to be interpolated whose pixel values are abrupt after interpolation are selected through this screening, and the fourth average value replaces their pixel values, which can avoid color disorder in the image content and reduce distortion of the image content of the interpolated current field image.
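A minimal per-channel sketch of the filtering in steps S51 to S54, following the replacement condition exactly as stated above (the function name and the in-place update are assumptions):

```python
import numpy as np

def filter_interpolated_pixel(field: np.ndarray, i: int, j: int,
                              threshold: float) -> None:
    """Steps S51-S54 on one channel: replace the interpolated value at
    (i, j) with the fourth average value when both differences are
    within the second preset threshold. Updates `field` in place."""
    above = float(field[i - 1, j])        # adjacent pixel above
    below = float(field[i + 1, j])        # adjacent pixel below
    d4 = abs(above - below)               # fourth absolute difference
    avg4 = (above + below) / 2.0          # fourth average value
    d5 = abs(avg4 - float(field[i, j]))   # fifth absolute difference
    if d4 <= threshold and d5 <= threshold:
        field[i, j] = avg4
```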
The following are examples of the apparatus of the application that may be used to perform the method of the application. For details not disclosed in the embodiments of the apparatus of the present application, please refer to the method in the embodiments of the present application.
Referring to fig. 2, a schematic structural diagram of a video de-interlacing processing device according to an embodiment of the present application is shown. The video de-interlacing processing device 6 provided by the embodiment of the application comprises:
a field image obtaining module 61, configured to obtain a current field image, a previous field image of the current field image, and a next field image of the current field image, where the current field image is an image obtained by performing interlace sampling on a video image of a current frame, the previous field image of the current field image is an image obtained by performing interlace sampling on a video image of a previous frame of the current frame, and the next field image of the current field image is an image obtained by performing interlace sampling on a video image of a next frame of the current frame;
The static pixel point interpolation module 62 is configured to interpolate a pixel point to be interpolated according to field data of a previous field image and field data of a next field image if the pixel point to be interpolated of the current field image is a static pixel point;
the motion pixel interpolation module 63 is configured to interpolate the pixel to be interpolated according to the field data of the previous field image, the field data of the next field image, and the field data of the current field image if the pixel to be interpolated of the current field image is a motion pixel;
the interpolated current field image obtaining module 64 is configured to obtain an interpolated current field image.
The apparatus provided by the application acquires a current field image, a previous field image of the current field image and a next field image of the current field image, wherein the current field image is an image obtained by performing interlaced sampling on the video image of the current frame, the previous field image is an image obtained by performing interlaced sampling on the video image of the frame preceding the current frame, and the next field image is an image obtained by performing interlaced sampling on the video image of the frame following the current frame. If the pixel point to be interpolated of the current field image is a static pixel point, the pixel point to be interpolated is interpolated according to the field data of the previous field image and the field data of the next field image, wherein the pixel point to be interpolated is a pixel point whose pixel value is to be filled. If the pixel point to be interpolated of the current field image is a motion pixel point, the pixel point to be interpolated is interpolated according to the field data of the previous field image, the field data of the next field image and the field data of the current field image, and the interpolated current field image is then obtained. In this way, motion detection is performed on the pixel point to be interpolated of the current field image to determine whether it is a static pixel point or a motion pixel point, and the corresponding interpolation mode is applied. By adopting different interpolation modes for static pixel points and motion pixel points, the interpolation precision of the pixel points to be interpolated in the current field image is improved, and distortion, blurring and jitter of the image content of the interpolated current field image are avoided.
The following are examples of the apparatus of the present application, which may be used to perform the method of the present application. For details not disclosed in the apparatus embodiments of the present application, please refer to the method in the embodiments of the present application.
Referring to fig. 3, the present application further provides an electronic device 300, which includes a parity field discriminator 301, a memory 302, and a data processor 303;
The parity field discriminator 301 is configured to receive a current field image, a previous field image of the current field image, and a next field image of the current field image, and send the current field image, the previous field image of the current field image, and the next field image of the current field image to the memory 302, where the current field image is an image obtained by performing interlace sampling on a video image of a current frame, the previous field image of the current field image is an image obtained by performing interlace sampling on a video image of a previous frame of the current frame, and the next field image of the current field image is an image obtained by performing interlace sampling on a video image of a next frame of the current frame;
The memory 302 is configured to send the current field image, a previous field image of the current field image, and a next field image of the current field image to the data processor 303;
The data processor 303 is configured to interpolate a pixel to be interpolated in the current field image according to the video de-interlacing processing method described above, so as to obtain an interpolated current field image.
The electronic device may be a computer, a mobile phone, a tablet computer, or a field-programmable gate array (FPGA) chip.
The parity field discriminator can determine whether the current field image, a previous field image of the current field image, and a next field image of the current field image are odd fields or even fields, and can identify resolutions of the current field image, the previous field image of the current field image, and the next field image of the current field image.
The memory 302 may include random access memory (RAM) or read-only memory (ROM). Optionally, the memory 302 includes a non-transitory computer-readable storage medium. The memory 302 may be used to store instructions, programs, code, sets of code, or sets of instructions. The memory 302 may include a stored-program area and a stored-data area, where the stored-program area may store instructions for implementing an operating system, instructions for at least one function (such as a touch function, a sound playing function, or an image playing function), instructions for implementing the various method embodiments described above, and the like, and the stored-data area may store data related to the various method embodiments described above. The memory 302 may optionally also be at least one storage device located remotely from the aforementioned data processor 303. As a computer storage medium, the memory 302 may include an operating system, a network communication module, a user interface module, and an operating application.
The data processor 303 may include one or more processing cores. The data processor 303 connects various parts of the overall electronic device using various interfaces and lines, and performs various functions of the electronic device and processes data by running or executing the instructions, programs, code sets or instruction sets stored in the memory 302 and invoking the data stored in the memory 302. Optionally, the data processor 303 may be implemented in at least one hardware form among digital signal processing (DSP), field-programmable gate array (FPGA) and programmable logic array (PLA). The data processor 303 may integrate one or a combination of a central processing unit (CPU), a graphics processing unit (GPU), a modem, and the like. The CPU mainly handles the operating system, the user interface, application programs and the like; the GPU is used for rendering and drawing the content to be displayed by the display layer; the modem is used for handling wireless communication. It will be appreciated that the modem may also not be integrated into the data processor 303 and may instead be implemented by a separate chip.
The data processor 303 may be configured to invoke the application program of the video de-interlacing processing method stored in the memory 302 and specifically execute the method steps of the foregoing embodiments; for the specific execution process, reference may be made to the description of the embodiments above, which is not repeated here.
The present application also provides a computer-readable storage medium on which a computer program is stored; the program is adapted to be loaded by a processor and to execute the method steps of the above-described embodiments, and for the specific execution process reference may be made to the description of the embodiments, which is not repeated here. The storage medium may be provided in an electronic device such as a personal computer, a notebook computer, a smart phone or a tablet computer.
For the device embodiments, reference is made to the description of the method embodiments for the relevant points, since they essentially correspond to the method embodiments. The above-described apparatus embodiments are merely illustrative, in which components illustrated as separate components may or may not be physically separate, and components shown as units may or may not be physical units, may be located in one place, or may be distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purposes of the present application. Those of ordinary skill in the art will understand and implement the present application without undue burden.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks. These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart block or blocks and/or block diagram block or blocks.
In one typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or nonvolatile memory, such as read-only memory (ROM) or flash RAM. Memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic tape storage, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media does not include transitory computer-readable media (transmission media), such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The foregoing is merely exemplary of the present application and is not intended to limit the present application. Various modifications and variations of the present application will be apparent to those skilled in the art. Any modification, equivalent replacement, improvement, etc. which come within the spirit and principles of the application are to be included in the scope of the claims of the present application.