CN112581416A - Edge fusion processing and control system and method for playing video - Google Patents
- Publication number
- CN112581416A (application number CN202011455035.6A)
- Authority
- CN
- China
- Prior art keywords
- video
- edge
- width value
- primary
- area
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Links
- 238000007499 fusion processing Methods 0.000 title claims abstract description 37
- 238000000034 method Methods 0.000 title claims abstract description 36
- 230000004927 fusion Effects 0.000 claims abstract description 59
- 238000012545 processing Methods 0.000 claims description 51
- 238000012937 correction Methods 0.000 claims description 24
- 238000000605 extraction Methods 0.000 claims description 22
- 238000002156 mixing Methods 0.000 claims description 6
- 230000011218 segmentation Effects 0.000 claims description 4
- 230000007704 transition Effects 0.000 description 9
- 230000000694 effects Effects 0.000 description 8
- 230000000007 visual effect Effects 0.000 description 7
- 238000012986 modification Methods 0.000 description 3
- 230000004048 modification Effects 0.000 description 3
- 230000009286 beneficial effect Effects 0.000 description 1
- 239000003086 colorant Substances 0.000 description 1
- 238000011161 development Methods 0.000 description 1
- 230000018109 developmental process Effects 0.000 description 1
- 238000010586 diagram Methods 0.000 description 1
- 238000004088 simulation Methods 0.000 description 1
- 230000016776 visual perception Effects 0.000 description 1
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/80—Geometric correction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/13—Edge detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20221—Image fusion; Image merging
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Controls And Circuits For Display Device (AREA)
- Image Processing (AREA)
Abstract
The invention provides an edge fusion processing and control system and method for playing video, wherein the method comprises the following steps: receiving a video signal sent by a video acquisition end, and dividing a video image corresponding to the video signal into a plurality of video image blocks; performing primary edge parameter fusion processing on the video image blocks to obtain primary video images subjected to primary edge fusion processing; correspondingly sending signals corresponding to the primary video images to a plurality of video players, wherein the plurality of video players display the primary video images corresponding to the received video signals; acquiring a video playing whole image of the plurality of video players displaying the primary video images, and performing secondary fusion of edge parameters according to the video playing whole image to obtain secondary video images after secondary fusion; and correspondingly sending the signals corresponding to the secondary video images to the plurality of video players, wherein the plurality of video players display the secondary video images corresponding to the received video signals. The system may include modules corresponding to the steps of the method.
Description
Technical Field
The invention provides an edge fusion processing and control system and method for playing a video, and belongs to the technical field of image processing.
Background
Edge fusion originated in military analog simulation and stereoscopic cinema systems: after a fusion band is added through software or dedicated hardware, a large-size, high-resolution picture is split, deformed and displayed by several display devices, which together form a single large, high-resolution picture (for example, a picture too large for one projector can be projected in parts by several projectors and spliced together). The pursuit of bright, very large, pure-colored and high-resolution display effects has long been a latent requirement of human visual perception. With the establishment of command monitoring centers and network management centers, and the spread of video conferences, academic reports, technical lectures and multifunctional conference rooms, the demand for large-picture, multicolor, high-brightness and high-resolution display has grown ever stronger, and the rapidly developing digital edge fusion large-screen display technology is gradually meeting it. With the continuous development and innovation of projection display technology and the rising expectations of viewers, displays with very large pictures, high brightness and higher resolution have become an urgent market need. However, with existing edge blending modes, ultra-large pictures and high-resolution video frequently exhibit uneven edge chromaticity and brightness across the multiple display devices.
Disclosure of Invention
The invention provides an edge fusion processing and control system and method for playing video, which are used to solve the problem of uneven edge chroma and brightness.
the invention provides an edge fusion processing and control method for playing a video, which comprises the following steps:
receiving a video signal sent by a video acquisition end, and dividing a video image corresponding to the video signal into a plurality of video image blocks;
performing primary edge parameter fusion processing on the video image blocks to obtain primary video images subjected to primary edge fusion processing;
correspondingly sending the signals corresponding to the primary video images to a plurality of video players, wherein the plurality of video players display the primary video images corresponding to the received video signals;
acquiring a video playing whole image of the plurality of video players displaying the primary video image, and performing secondary fusion of edge parameters according to the video playing whole image to obtain a secondary video image after secondary fusion;
and correspondingly sending the signals corresponding to the secondary video images to a plurality of video players, wherein the plurality of video players display the secondary video images corresponding to the received video signals.
Further, performing primary edge parameter fusion processing on the video image block to obtain a primary video image after the primary edge fusion processing, including:
performing edge image extraction at different positions twice on each video image block to obtain a primary edge image area and a secondary edge image area corresponding to the video image blocks;
and according to the distribution relation of the distribution sequence of the video image blocks, carrying out geometric correction, color information unification and light leakage compensation treatment on the edge image area in sequence.
Further, performing two times of edge image extraction at different positions on each of the video image blocks to obtain a primary edge image area and a secondary edge image area corresponding to the video image blocks, including:
acquiring side length information of the video image block, and acquiring a region width value of a primary edge image region by using the side length information through the following formula:
wherein D1 represents the region width value of the primary edge image region; L denotes the length value of a video image block, D denotes the width value of a video image block, and α1 and α2 are width value adjustment coefficients, where α1 ranges from 0.33 to 0.57 and α2 ranges from 1.12 to 1.24;
acquiring the area width value of the secondary edge image area by using the area width value of the primary edge image area through the following formula;
wherein D2 represents the region width value of the secondary edge image region; β1 and β2 are width value adjustment coefficients, where β1 ranges from 0.32 to 0.48 and β2 ranges from 2.76 to 2.93;
and intercepting the primary edge image area and the secondary edge image area according to the area width value of the primary edge image area and the area width value of the secondary edge image area.
Further, acquiring a video playing whole image of the plurality of video players displaying the primary video image, and performing secondary fusion of edge parameters according to the video playing whole image to obtain a secondary video image after secondary fusion, including:
determining the chromaticity difference and the brightness difference of a preset edge area between two adjacent video image blocks according to the collected video playing whole image and the preset edge area of each display screen within that whole image; the width value of the preset edge area is greater than the width value of the primary edge image area and less than the sum of the width value of the primary edge image area and the width value of the secondary edge image area;
if neither the chroma difference nor the brightness difference exceeds the difference threshold, or only one of the two indexes exceeds the difference threshold, secondary geometric correction, color information unification and light leakage compensation processing are carried out directly on the primary edge image area and the secondary edge image area;
when the chrominance difference and the luminance difference of the edge areas of the two video image blocks both exceed the difference threshold, the width value of the primary edge image area and the width value of the secondary edge image area are adjusted in combination with the preset edge area, and the adjusted width value of the primary edge image area and the adjusted width value of the secondary edge image area are obtained;
and extracting the adjusted primary edge image area and the adjusted secondary edge image area according to the width value of the adjusted primary edge image area and the width value of the adjusted secondary edge image area, and performing secondary geometric correction, color information unification and light leakage compensation on the adjusted primary edge image area and the adjusted secondary edge image area.
Further, the width value of the adjusted primary edge image area and the width value of the adjusted secondary edge image area are obtained through the following formulas:
wherein Dt1 represents the region width value of the adjusted primary edge image region; L represents the length value of the video image block, D represents the width value of the video image block, and D0 represents the width value of the preset edge area; α1 and α2 are width value adjustment coefficients, where α1 ranges from 0.33 to 0.57 and α2 ranges from 1.12 to 1.24;
wherein Dt2 represents the region width value of the adjusted secondary edge image region; β1 and β2 are width value adjustment coefficients, where β1 ranges from 0.32 to 0.48 and β2 ranges from 2.76 to 2.93.
An edge blending processing and control system for playing video, the system comprising:
the segmentation module is used for receiving a video signal sent by a video acquisition end and segmenting a video image corresponding to the video signal into a plurality of video image blocks;
the primary fusion module is used for performing primary edge parameter fusion processing on the video image blocks to obtain primary video images subjected to primary edge fusion processing;
the video playing module is used for correspondingly sending the signals corresponding to the primary video images to a plurality of video players, and the plurality of video players display the primary video images corresponding to the received video signals;
the secondary fusion module is used for acquiring a video playing whole image of the plurality of video players displaying the primary video image, and performing secondary fusion of edge parameters according to the video playing whole image to obtain a secondary video image after secondary fusion;
and the video display module is used for correspondingly sending the signals corresponding to the secondary video images to a plurality of video players, and the plurality of video players display the secondary video images corresponding to the received video signals.
Further, the primary fusion module comprises:
the edge extraction module is used for extracting edge images at different positions twice for each video image block to obtain a primary edge image area and a secondary edge image area corresponding to the video image blocks;
and the primary fusion parameter processing module is used for sequentially carrying out geometric correction, color information unification and light leakage compensation processing on the edge image area according to the distribution relation of the distribution sequence of the video image blocks.
Further, the edge extraction module includes:
the primary edge area acquisition module is used for acquiring the side length information of the video image block, and acquiring the area width value of the primary edge image area by using the side length information through the following formula:
wherein D1 represents the region width value of the primary edge image region; L denotes the length value of a video image block, D denotes the width value of a video image block, and α1 and α2 are width value adjustment coefficients, where α1 ranges from 0.33 to 0.57 and α2 ranges from 1.12 to 1.24;
the secondary edge area acquisition module is used for acquiring the area width value of the secondary edge image area by using the area width value of the primary edge image area through the following formula;
wherein D2 represents the region width value of the secondary edge image region; β1 and β2 are width value adjustment coefficients, where β1 ranges from 0.32 to 0.48 and β2 ranges from 2.76 to 2.93;
and the edge image extraction module is used for intercepting the primary edge image area and the secondary edge image area according to the area width value of the primary edge image area and the area width value of the secondary edge image area.
Further, the secondary fusion module comprises:
the difference value determining module is used for determining the chromaticity difference and the brightness difference of a preset edge area between two adjacent video image blocks according to the acquired video playing whole image and the preset edge area of each display screen within that whole image; the width value of the preset edge area is greater than the width value of the primary edge image area and less than the sum of the width value of the primary edge image area and the width value of the secondary edge image area;
the second-level fusion parameter processing module I is used for directly carrying out secondary geometric correction, color information unification and light leakage compensation processing according to the first-level edge image area and the second-level edge image area when the chroma difference and the brightness difference do not exceed a difference threshold value or one index of the chroma difference and the brightness difference exceeds the difference threshold value;
the edge image area adjusting module is used for adjusting the width value of the primary edge image area and the width value of the secondary edge image area in combination with the preset edge area when the chrominance difference and the luminance difference of the edge areas of the two video image blocks both exceed the difference threshold, and acquiring the adjusted width value of the primary edge image area and the adjusted width value of the secondary edge image area;
and the second secondary fusion parameter processing module is used for extracting the adjusted primary edge image area and the adjusted secondary edge image area according to the width value of the adjusted primary edge image area and the width value of the adjusted secondary edge image area, and performing secondary geometric correction, color information unification and light leakage compensation processing on the adjusted primary edge image area and the adjusted secondary edge image area.
Further, the width value of the adjusted primary edge image area and the width value of the adjusted secondary edge image area are obtained through the following formulas:
wherein Dt1 represents the region width value of the adjusted primary edge image region; L represents the length value of the video image block, D represents the width value of the video image block, and D0 represents the width value of the preset edge area; α1 and α2 are width value adjustment coefficients, where α1 ranges from 0.33 to 0.57 and α2 ranges from 1.12 to 1.24;
wherein Dt2 represents the region width value of the adjusted secondary edge image region; β1 and β2 are width value adjustment coefficients, where β1 ranges from 0.32 to 0.48 and β2 ranges from 2.76 to 2.93.
The invention has the beneficial effects that:
the edge fusion processing and control system and method for playing the video, which is proposed by the one of Peng and buy, can effectively improve the uniformity and consistency of image chromaticity and brightness at the spliced edge in the process of playing the video by a plurality of display screens by setting two edge areas, adjusting the edge areas and carrying out the parameter processing twice. Meanwhile, through setting two edge areas and corresponding edge area adjustment, a mode of carrying out color information unified processing on each display screen in the extending direction of the video image center through the video image edge areas is achieved, the chromaticity transition performance of the video display image corresponding to each display screen is improved, the chromaticity and the brightness of the video image which is kept uniform and consistent in the whole video image display process of each display screen are further improved, the visual comfort degree of a viewer is improved, and the problem that the chromaticity and the brightness of video playing are not uniform due to the fact that the chromaticity and the brightness of each display screen are too large is effectively avoided.
Drawings
FIG. 1 is a flow chart of the method of the present invention;
FIG. 2 is a system block diagram of the system of the present invention;
FIG. 3 is a schematic view of the edge area according to the present invention.
Detailed Description
The preferred embodiments of the present invention will be described in conjunction with the accompanying drawings, and it will be understood that they are described herein for the purpose of illustration and explanation and not limitation.
The invention provides an edge blending processing and control method for playing video, as shown in fig. 1, the method includes:
s1, receiving a video signal sent by a video acquisition end, and dividing a video image corresponding to the video signal into a plurality of video image blocks;
s2, performing primary edge parameter fusion processing on the video image blocks to obtain primary video images subjected to primary edge fusion processing;
s3, correspondingly sending the signals corresponding to the primary video images to a plurality of video players, wherein the plurality of video players display the primary video images corresponding to the received video signals;
s4, collecting a video playing whole image of the plurality of video players for displaying the primary video image, and carrying out secondary fusion of edge parameters according to the video playing whole image to obtain a secondary video image after secondary fusion;
s5, correspondingly sending the signals corresponding to the secondary video images to a plurality of video players, wherein the plurality of video players display the secondary video images corresponding to the received video signals.
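For orientation only, the flow of steps s1 to s5 can be sketched in Python; the 2×2 grid size, the function names and the placeholder processing bodies are illustrative assumptions, not the patent's implementation:

```python
import numpy as np

def split_into_blocks(frame, rows, cols):
    """s1: divide the video image into rows x cols video image blocks."""
    h, w = frame.shape[:2]
    bh, bw = h // rows, w // cols
    return [frame[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw]
            for r in range(rows) for c in range(cols)]

def primary_edge_fusion(block):
    """s2: stand-in for primary edge parameter fusion processing."""
    return block.astype(np.float32)  # real processing would correct the edge regions here

def secondary_edge_fusion(block, whole_image):
    """s4: stand-in for secondary fusion driven by the captured whole image."""
    return block  # real processing would re-correct edges using the captured image

# Minimal end-to-end run on a synthetic 64x64 grayscale frame.
frame = np.arange(64 * 64, dtype=np.float32).reshape(64, 64)
blocks = split_into_blocks(frame, 2, 2)                      # s1
primary = [primary_edge_fusion(b) for b in blocks]           # s2
captured_whole = frame                                       # s3/s4: stand-in for a camera capture
secondary = [secondary_edge_fusion(b, captured_whole) for b in primary]  # s4/s5
print(len(secondary), secondary[0].shape)
```

The two fusion functions are deliberately empty shells: the patent's substance lies in how their bodies use the edge regions and the captured whole image, which the later steps describe.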
The working principle of the technical scheme is as follows: firstly, receiving a video signal sent by a video acquisition end, and dividing a video image corresponding to the video signal into a plurality of video image blocks; then, primary edge parameter fusion processing is carried out on the video image blocks to obtain primary video images subjected to primary edge fusion processing; then, correspondingly sending the signals corresponding to the primary video images to a plurality of video players, wherein the plurality of video players display the primary video images corresponding to the received video signals; then, acquiring a video playing whole image of the plurality of video players for displaying the primary video image, and performing secondary fusion of edge parameters according to the video playing whole image to obtain a secondary video image after secondary fusion; and finally, correspondingly sending the signals corresponding to the secondary video images to a plurality of video players, wherein the plurality of video players display the secondary video images corresponding to the received video signals.
The effect of the above technical scheme is as follows: setting two edge areas, adjusting the edge areas and carrying out the fusion parameter processing twice effectively improves the uniformity and consistency of image chromaticity and brightness at the spliced edges of a plurality of display screens during video playing. Meanwhile, by setting two edge areas and adjusting them correspondingly, color information is processed uniformly on each display screen from the video image edge areas toward the image center, which improves the chromaticity transition of the video display image on each display screen, keeps the chromaticity and brightness of the video image uniform and consistent across the whole displayed image on every screen, improves the visual comfort of the viewer, and effectively avoids the problem of non-uniform chromaticity and brightness in video playing caused by excessive chromaticity and brightness differences between display screens.
In an embodiment of the present invention, performing primary edge parameter fusion processing on the video image block to obtain a primary video image after the primary edge fusion processing includes:
s201, performing two times of edge image extraction at different positions on each video image block to obtain a primary edge image area and a secondary edge image area corresponding to the video image blocks;
s202, according to the distribution relation of the distribution sequence of the video image blocks, carrying out geometric correction, color information unification and light leakage compensation treatment on the edge image area in sequence.
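The patent does not disclose the exact geometric correction, color unification or light leakage compensation operations used in s202. As a hedged illustration of what processing inside a fusion band typically involves, the sketch below shows a gamma-compensated blending ramp of the kind commonly used in projector edge blending; the `gamma` value and the function name are assumptions, not the patent's method:

```python
import numpy as np

def fusion_band_ramp(width, gamma=2.2):
    """Edge-blending ramp: per-column intensity weights across an overlap band.

    The left block's edge fades out while the right block's edge fades in, so
    that after display gamma is applied the summed light across the overlap
    stays constant, avoiding a visible bright seam.
    """
    t = np.linspace(0.0, 1.0, width)
    fade_out = (1.0 - t) ** (1.0 / gamma)   # weight applied to the left block's edge
    fade_in = t ** (1.0 / gamma)            # weight applied to the right block's edge
    return fade_out, fade_in

fade_out, fade_in = fusion_band_ramp(8)
# After raising the weights back to display gamma, contributions sum to 1.
print(np.allclose(fade_out ** 2.2 + fade_in ** 2.2, 1.0))
```

A ramp like this addresses only brightness; the patent's scheme additionally unifies color information and compensates light leakage across the same regions.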
Performing two times of edge image extraction at different positions on each video image block to obtain a primary edge image area and a secondary edge image area corresponding to the video image blocks, including:
s2011, acquiring side length information of the video image block, and acquiring a region width value of a primary edge image region by using the side length information through the following formula:
wherein D1 represents the region width value of the primary edge image region; L denotes the length value of a video image block, D denotes the width value of a video image block, and α1 and α2 are width value adjustment coefficients, where α1 ranges from 0.33 to 0.57 and α2 ranges from 1.12 to 1.24;
s2012, obtaining the area width value of the secondary edge image area by using the area width value of the primary edge image area through the following formula;
wherein D2 represents the region width value of the secondary edge image region; β1 and β2 are width value adjustment coefficients, where β1 ranges from 0.32 to 0.48 and β2 ranges from 2.76 to 2.93;
s2013, intercepting the primary edge image area and the secondary edge image area according to the area width value of the primary edge image area and the area width value of the secondary edge image area.
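The formulas for D1 and D2 appear only as images in the original patent and are not reproduced in this text; taking the two region width values as given inputs, the interception step s2013 could be sketched as follows (the function name, parameter names and side convention are illustrative assumptions):

```python
import numpy as np

def intercept_edge_regions(block, d1, d2, side="right"):
    """s2013: cut the primary (outermost, width d1) and secondary (adjacent
    inner, width d2) edge image regions from one side of a video image block."""
    if side == "right":
        primary = block[:, -d1:]                # outer strip, width d1
        secondary = block[:, -(d1 + d2):-d1]    # inner strip, width d2
    elif side == "left":
        primary = block[:, :d1]
        secondary = block[:, d1:d1 + d2]
    else:
        raise ValueError("side must be 'left' or 'right'")
    return primary, secondary

block = np.zeros((40, 60), dtype=np.uint8)  # one video image block
p, s = intercept_edge_regions(block, d1=6, d2=4)
print(p.shape, s.shape)
```

In a full implementation the same interception would be applied to every edge of a block that abuts a neighbouring block, with d1 and d2 coming from the patent's width formulas.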
The working principle of the technical scheme is as follows: firstly, performing two times of edge image extraction at different positions on each video image block to obtain a primary edge image area and a secondary edge image area corresponding to the video image blocks; and then, according to the distribution relation of the distribution sequence of the video image blocks, carrying out geometric correction, color information unification and light leakage compensation treatment on the edge image area in sequence.
Performing two times of edge image extraction at different positions on each video image block to obtain a primary edge image area and a secondary edge image area corresponding to the video image blocks, including:
firstly, acquiring side length information of the video image block, and obtaining the region width value of the primary edge image region from the side length information; then, obtaining the area width value of the secondary edge image area by using the area width value of the primary edge image area; and finally, intercepting the primary edge image area and the secondary edge image area according to the area width value of the primary edge image area and the area width value of the secondary edge image area.
The effect of the above technical scheme is as follows: setting two edge areas, adjusting the edge areas and carrying out the fusion parameter processing twice effectively improves the uniformity and consistency of image chromaticity and brightness at the spliced edges of a plurality of display screens during video playing. Meanwhile, by setting two edge areas and adjusting them correspondingly, color information is processed uniformly on each display screen from the video image edge areas toward the image center, which improves the chromaticity transition of the video display image on each display screen, keeps the chromaticity and brightness of the video image uniform and consistent across the whole displayed image on every screen, improves the visual comfort of the viewer, and effectively avoids the problem of non-uniform chromaticity and brightness in video playing caused by excessive chromaticity and brightness differences between display screens.
On the other hand, the two edge region ranges are obtained through the formula, so that the obtained edge regions are suitable for the sizes of video display screens with various proportions, the uniformity and consistency of the chrominance and the brightness of the video display edges can be improved to the greatest extent by enabling the obtained edge regions to be in the video display screens with the corresponding proportion sizes, independent edge processing and edge region extraction do not need to be carried out aiming at different video display screen sizes, the use compatibility and the universality of the edge fusion processing and control method provided by the embodiment are effectively improved, and the video image edge fusion processing efficiency and the processing speed are effectively improved.
In an embodiment of the present invention, acquiring a video playing whole image of the plurality of video players showing the primary video image, and performing secondary fusion of edge parameters according to the video playing whole image to obtain a secondary video image after the secondary fusion, includes:
s401, determining the chromaticity difference and the brightness difference of a preset edge area between two adjacent video image blocks according to the acquired video playing whole image and the preset edge area of each display screen within that whole image; the width value of the preset edge area is greater than the width value of the primary edge image area and less than the sum of the width value of the primary edge image area and the width value of the secondary edge image area;
s402, if neither the chroma difference nor the brightness difference exceeds the difference threshold, or only one of the two indexes exceeds the difference threshold, performing secondary geometric correction, color information unification and light leakage compensation processing directly on the primary edge image area and the secondary edge image area;
s403, when the chrominance difference and the luminance difference of the edge areas of the two video image blocks both exceed the difference threshold, adjusting the width value of the primary edge image area and the width value of the secondary edge image area in combination with the preset edge area, and acquiring the adjusted width value of the primary edge image area and the adjusted width value of the secondary edge image area;
s404, extracting the adjusted primary edge image area and the adjusted secondary edge image area according to the width value of the adjusted primary edge image area and the width value of the adjusted secondary edge image area, and performing secondary geometric correction, color information unification and light leakage compensation processing on the adjusted primary edge image area and the adjusted secondary edge image area.
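The threshold test of steps s401 to s403 can be illustrated with the following sketch. The channel layout, the mean-difference metric and all names are assumptions, since the patent does not specify how the chromaticity and brightness differences are computed:

```python
import numpy as np

def secondary_fusion_decision(edge_a, edge_b, chroma_thr, luma_thr):
    """s401-s403: compare the preset edge regions of two adjacent blocks and
    decide whether the edge-region widths must be adjusted before the second
    round of geometric / color / light-leakage correction.

    edge_a, edge_b: float arrays of shape (H, W, 3); channel 0 is assumed to
    hold luma and channels 1..2 chroma (an assumed layout for illustration).
    """
    luma_diff = abs(float(edge_a[..., 0].mean() - edge_b[..., 0].mean()))
    chroma_diff = abs(float(edge_a[..., 1:].mean() - edge_b[..., 1:].mean()))
    both_exceed = chroma_diff > chroma_thr and luma_diff > luma_thr
    # s402: neither (or only one) index exceeds -> correct directly.
    # s403: both exceed -> widths D1 and D2 must be re-derived first.
    return "adjust_widths" if both_exceed else "correct_directly"

a = np.full((10, 8, 3), 0.50)  # preset edge region of block A
b = np.full((10, 8, 3), 0.55)  # preset edge region of adjacent block B
print(secondary_fusion_decision(a, b, chroma_thr=0.02, luma_thr=0.02))
```

The "adjust_widths" branch corresponds to s403/s404, where the widened (or narrowed) edge regions are re-extracted before the corrections are repeated.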
The width value of the adjusted primary edge image area and the width value of the adjusted secondary edge image area are obtained through the following formulas:
wherein Dt1 represents the region width value of the adjusted primary edge image region; L represents the length value of the video image block, D represents the width value of the video image block, and D0 represents the width value of the preset edge area; α1 and α2 are width value adjustment coefficients, where α1 ranges from 0.33 to 0.57 and α2 ranges from 1.12 to 1.24;
wherein D ist2Representing the region width value of the adjusted secondary edge image region; beta is a1And beta2Adjusting the coefficient for the width value, wherein1The value range of (A) is 0.32-0.48; beta is a2The value range of (A) is 2.76-2.93.
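Step S401 also imposes an ordering constraint on these widths: the preset edge width D0 must exceed the primary edge width but stay below the sum of the primary and secondary edge widths. A one-line check, with illustrative variable names:

```python
def preset_edge_width_valid(d0, d1, d2):
    """Check the constraint stated in S401: the preset edge width d0 must
    exceed the primary edge width d1 yet stay below the sum of the primary
    and secondary edge widths (d1 + d2). Variable names are illustrative."""
    return d1 < d0 < d1 + d2

print(preset_edge_width_valid(30, 20, 15))  # 20 < 30 < 35 holds
```

Any preset width outside the open interval (d1, d1 + d2) would leave the difference measurement either entirely inside the primary strip or beyond both strips, so the check fails for such values.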
The working principle of the technical scheme is as follows: first, according to the acquired overall video playing image and the preset edge area of each display screen within that overall image, the chromaticity difference and the brightness difference of the preset edge area between two adjacent video image blocks are determined; the width value of the preset edge area is greater than the width value of the primary edge image area and less than the sum of the width values of the primary and secondary edge image areas. Then, if neither the chromaticity difference nor the brightness difference exceeds a difference threshold, or only one of the two indexes exceeds it, secondary geometric correction, color information unification and light leakage compensation are performed directly on the primary and secondary edge image areas. When both differences exceed the threshold, the width values of the primary and secondary edge image areas are adjusted in combination with the preset edge area, and the adjusted width values are obtained. Finally, the adjusted primary and secondary edge image areas are extracted according to their adjusted width values, and secondary geometric correction, color information unification and light leakage compensation are performed on them.
The effect of the above technical scheme is as follows: by setting two edge areas and performing the corresponding edge-area adjustment, each display screen can apply color-information unification that extends from the video image edge area toward the image center. This improves the chromaticity transition of the video image displayed on each screen, keeps chromaticity and brightness uniform and consistent across the screens while the overall video image is displayed, improves the viewer's visual comfort, and effectively avoids uneven playback chromaticity and brightness caused by excessive chromaticity and brightness differences between the display screens.
On the other hand, adjusting the two edge region ranges through the above formulas effectively improves the quality of the secondary edge-fusion parameter processing of the adjusted edge regions, and thus the uniformity and consistency of edge color information after the secondary edge fusion. At the same time, a video image whose parameters are fused using the adjusted edge regions provides, for each display screen, a sufficient color and brightness transition from the video edge toward the video center while keeping edge color and brightness uniform and consistent, effectively preventing uneven color and brightness distribution between the edge and the inner area of a single video display screen.
An embodiment of the present invention provides an edge blending processing and control system for playing a video, as shown in fig. 2, the system includes:
the segmentation module is used for receiving a video signal sent by a video acquisition end and segmenting a video image corresponding to the video signal into a plurality of video image blocks;
the primary fusion module is used for performing primary edge parameter fusion processing on the video image blocks to obtain primary video images subjected to primary edge fusion processing;
the video playing module is used for correspondingly sending the signals corresponding to the primary video images to a plurality of video players, and the plurality of video players display the primary video images corresponding to the received video signals;
the secondary fusion module is used for acquiring a video playing integral image of the plurality of video players for displaying the primary video image, and performing secondary fusion of edge parameters according to the video playing integral image to obtain a secondary video image after secondary fusion;
and the video display module is used for correspondingly sending the signals corresponding to the secondary video images to a plurality of video players, and the plurality of video players display the secondary video images corresponding to the received video signals.
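The data flow through the five claimed modules can be sketched end to end. Every function below is a hypothetical stand-in: the patent names the modules (segmentation, primary fusion, playing, secondary fusion, display) but does not disclose their implementations, so the bodies are placeholders.

```python
# Illustrative data flow through the five modules of the claimed system.
# All function names and data shapes are assumptions for illustration.

def segment(video_image, n):
    """Segmentation module: split one frame (a list of columns here,
    purely for illustration) into n video image blocks."""
    size = len(video_image) // n
    return [video_image[i * size:(i + 1) * size] for i in range(n)]

def primary_fuse(blocks):
    """Primary fusion module: first-pass edge-parameter fusion (stub)."""
    return blocks

def secondary_fuse(blocks, captured_overall_image):
    """Secondary fusion module: second-pass fusion driven by the captured
    overall playing image (stub)."""
    return blocks

def play(blocks):
    """Video playing / display module: one block per player (stub)."""
    return ["player %d shows block of %d columns" % (i, len(b))
            for i, b in enumerate(blocks)]

frame = list(range(12))                 # a stand-in 12-column frame
blocks = primary_fuse(segment(frame, 3))
overall = play(blocks)                  # players display the primary images
final = play(secondary_fuse(blocks, overall))
print(final[0])
```

The point of the sketch is the ordering: the overall played image is captured after the first display pass and then feeds the secondary fusion, which is what distinguishes this two-pass scheme from a single blending step.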
The working principle of the technical scheme is as follows:
firstly, a video signal sent by a video acquisition end is received through a segmentation module, and a video image corresponding to the video signal is segmented into a plurality of video image blocks; then, a primary fusion module is used for carrying out primary edge parameter fusion processing on the video image blocks to obtain primary video images subjected to primary edge fusion processing; then, a video playing module is adopted to correspondingly send signals corresponding to the primary video images to a plurality of video players, and the plurality of video players display the primary video images corresponding to the received video signals; then, acquiring a video playing integral image of the plurality of video players for displaying the primary video image through a secondary fusion module, and carrying out secondary fusion on edge parameters according to the video playing integral image to obtain a secondary video image after secondary fusion; and finally, correspondingly sending the signals corresponding to the secondary video images to a plurality of video players by adopting a video display module, wherein the plurality of video players display the secondary video images corresponding to the received video signals.
The effect of the above technical scheme is as follows: the uniformity and consistency of image chromaticity and brightness at the spliced edge of a plurality of display screens in the respective video playing process can be effectively improved by setting two edge areas, adjusting the edge areas and carrying out two times of fusion parameter processing. Meanwhile, through setting two edge areas and corresponding edge area adjustment, a mode of carrying out color information unified processing on each display screen in the extending direction of the video image center through the video image edge areas is achieved, the chromaticity transition performance of the video display image corresponding to each display screen is improved, the chromaticity and the brightness of the video image which is kept uniform and consistent in the whole video image display process of each display screen are further improved, the visual comfort degree of a viewer is improved, and the problem that the chromaticity and the brightness of video playing are not uniform due to the fact that the chromaticity and the brightness of each display screen are too large is effectively avoided.
In one embodiment of the invention, the primary fusion module comprises:
the edge extraction module is used for extracting edge images at different positions twice for each video image block to obtain a primary edge image area and a secondary edge image area corresponding to the video image blocks;
and the primary fusion parameter processing module is used for sequentially carrying out geometric correction, color information unification and light leakage compensation processing on the edge image area according to the distribution relation of the distribution sequence of the video image blocks.
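The primary fusion parameter processing applies three passes in a fixed order. The sketch below only records that order; the three passes are placeholder no-ops standing in for real geometric correction, color-information unification, and light-leakage compensation routines, none of which the patent spells out.

```python
# Sketch of the processing order named for the primary fusion parameter
# processing module. The three passes are placeholders that log their
# names; real implementations would transform the edge pixels.

def geometric_correction(region, log):
    log.append("geometric_correction")
    return region

def unify_color_info(region, log):
    log.append("unify_color_info")
    return region

def compensate_light_leak(region, log):
    log.append("compensate_light_leak")
    return region

def primary_fusion_params(edge_region):
    """Run the three passes sequentially and return the processed region
    together with the order in which the passes ran."""
    log = []
    for step in (geometric_correction, unify_color_info, compensate_light_leak):
        edge_region = step(edge_region, log)
    return edge_region, log

_, order = primary_fusion_params(object())
print(order)
```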
Wherein the edge extraction module comprises:
the primary edge area acquisition module is used for acquiring the side length information of the video image block, and acquiring the area width value of the primary edge image area by using the side length information through the following formula:
where D_1 represents the region width value of the primary edge image region; L represents the length value of the video image block and D represents the width value of the video image block; α1 and α2 are width-value adjustment coefficients, where α1 ranges from 0.33 to 0.57 and α2 from 1.12 to 1.24;
the secondary edge area acquisition module is used for acquiring the area width value of the secondary edge image area from the area width value of the primary edge image area through the following formula;
where D_2 represents the region width value of the secondary edge image region; β1 and β2 are width-value adjustment coefficients, where β1 ranges from 0.32 to 0.48 and β2 from 2.76 to 2.93;
and the edge image extraction module is used for intercepting the primary edge image area and the secondary edge image area according to the area width value of the primary edge image area and the area width value of the secondary edge image area.
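Once the two width values are known, the interception performed by the edge image extraction module amounts to cropping two adjacent strips from the block. A minimal sketch, assuming the block is a list of pixel rows and taking the right-hand border (each shared border of adjacent blocks would be handled the same way):

```python
def extract_edge_regions(block, d1, d2):
    """Crop the primary edge strip (outermost d1 columns) and the
    secondary strip just inside it (the next d2 columns) from one side of
    a video image block represented as a list of pixel rows. The choice
    of the right-hand side is illustrative."""
    primary = [row[-d1:] for row in block]
    secondary = [row[-(d1 + d2):-d1] for row in block]
    return primary, secondary

block = [[0, 1, 2, 3, 4, 5, 6, 7]]    # a one-row, 8-pixel-wide block
p, s = extract_edge_regions(block, d1=2, d2=3)
print(p, s)
```

With d1 = 2 and d2 = 3 the primary strip is the last two columns and the secondary strip is the three columns immediately inside it, which matches the "two extractions at different positions" wording above.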
The working principle of the technical scheme is as follows: firstly, performing edge image extraction at different positions twice on each video image block through an edge extraction module to obtain a primary edge image area and a secondary edge image area corresponding to the video image blocks; and then, sequentially performing geometric correction, color information unification and light leakage compensation processing on the edge image area according to the distribution relation of the distribution sequence of the video image blocks by using a primary fusion parameter processing module.
Wherein, the operation process of the edge extraction module comprises the following steps:
Firstly, a primary edge area acquisition module is adopted to acquire the side length information of the video image block and, using that information, to obtain the area width value of the primary edge image area through the corresponding formula. Then, the area width value of the secondary edge image area is acquired from the area width value of the primary edge image area through a secondary edge area acquisition module. Finally, the primary edge image area and the secondary edge image area are intercepted by an edge image extraction module according to their respective area width values.
The effect of the above technical scheme is as follows: by setting two edge areas, adjusting them, and performing fusion parameter processing twice, the uniformity and consistency of image chromaticity and brightness at the spliced edges of multiple display screens can be effectively improved during playback. At the same time, setting two edge areas and performing the corresponding edge-area adjustment lets each display screen apply color-information unification that extends from the video image edge area toward the image center, improving the chromaticity transition of the video image on each screen, keeping chromaticity and brightness uniform and consistent across the screens while the overall video image is displayed, improving the viewer's visual comfort, and effectively avoiding uneven playback chromaticity and brightness caused by excessive chromaticity and brightness differences between the display screens.
In one embodiment of the present invention, the secondary fusion module includes:
the difference value determining module is used for determining the chromaticity difference and the brightness difference of a preset edge area between two adjacent video image blocks according to the acquired video playing whole image and the preset edge area of each display screen for playing the whole image on the video; the width value of the preset edge area is greater than the width value of the primary edge image area and is less than the sum of the width value of the primary edge image area and the width value of the secondary edge image area;
the second-level fusion parameter processing module I is used for directly carrying out secondary geometric correction, color information unification and light leakage compensation processing according to the first-level edge image area and the second-level edge image area when the chroma difference and the brightness difference do not exceed a difference threshold value or one index of the chroma difference and the brightness difference exceeds the difference threshold value;
the edge image area adjusting module is used for adjusting the width value of the primary edge image area and the width value of the secondary edge image area by combining a preset edge area when the chrominance difference and the luminance difference of the edge areas of the two video image blocks exceed a difference threshold, and acquiring the adjusted width value of the primary edge image area and the adjusted width value of the secondary edge image area;
and the second-level fusion parameter processing module is used for extracting the adjusted first-level edge image area and the second-level edge image area according to the width value of the adjusted first-level edge image area and the width value of the second-level edge image area, and performing secondary geometric correction, color information unification and light leakage compensation processing on the adjusted first-level edge image area and the adjusted second-level edge image area.
The width value of the adjusted primary edge image area and the width value of the secondary edge image area are obtained through the following formulas:
where D_t1 represents the region width value of the adjusted primary edge image region; L represents the length value of the video image block, D represents the width value of the video image block, and D0 represents the width value of the preset edge area; α1 and α2 are width-value adjustment coefficients, where α1 ranges from 0.33 to 0.57 and α2 from 1.12 to 1.24;
where D_t2 represents the region width value of the adjusted secondary edge image region; β1 and β2 are width-value adjustment coefficients, where β1 ranges from 0.32 to 0.48 and β2 from 2.76 to 2.93.
The working principle of the technical scheme is as follows: first, through the difference value determining module, the chromaticity difference and the brightness difference of the preset edge area between two adjacent video image blocks are determined according to the acquired overall video playing image and the preset edge area of each display screen within that overall image; the width value of the preset edge area is greater than the width value of the primary edge image area and less than the sum of the width values of the primary and secondary edge image areas. Then, when neither the chromaticity difference nor the brightness difference exceeds a difference threshold, or only one of the two indexes exceeds it, the first secondary fusion parameter processing module directly performs secondary geometric correction, color information unification and light leakage compensation on the primary and secondary edge image areas. Next, when both differences exceed the threshold, the edge image area adjusting module adjusts the width values of the primary and secondary edge image areas in combination with the preset edge area and obtains the adjusted width values. Finally, the second secondary fusion parameter processing module extracts the adjusted primary and secondary edge image areas according to their adjusted width values and performs secondary geometric correction, color information unification and light leakage compensation on them.
The effect of the above technical scheme is as follows: by setting two edge areas and performing the corresponding edge-area adjustment, each display screen can apply color-information unification that extends from the video image edge area toward the image center. This improves the chromaticity transition of the video image displayed on each screen, keeps chromaticity and brightness uniform and consistent across the screens while the overall video image is displayed, improves the viewer's visual comfort, and effectively avoids uneven playback chromaticity and brightness caused by excessive chromaticity and brightness differences between the display screens.
On the other hand, adjusting the two edge region ranges through the above formulas effectively improves the quality of the secondary edge-fusion parameter processing of the adjusted edge regions, and thus the uniformity and consistency of edge color information after the secondary edge fusion. At the same time, a video image whose parameters are fused using the adjusted edge regions provides, for each display screen, a sufficient color and brightness transition from the video edge toward the video center while keeping edge color and brightness uniform and consistent, effectively preventing uneven color and brightness distribution between the edge and the inner area of a single video display screen.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.
Claims (10)
1. An edge blending processing and control method for playing video, the method comprising:
receiving a video signal sent by a video acquisition end, and dividing a video image corresponding to the video signal into a plurality of video image blocks;
performing primary edge parameter fusion processing on the video image blocks to obtain primary video images subjected to primary edge fusion processing;
correspondingly sending the signals corresponding to the primary video images to a plurality of video players, wherein the plurality of video players display the primary video images corresponding to the received video signals;
acquiring a video playing integral image of the plurality of video players for displaying the primary video image, and performing secondary fusion of edge parameters according to the video playing integral image to obtain a secondary video image after secondary fusion;
and correspondingly sending the signals corresponding to the secondary video images to a plurality of video players, wherein the plurality of video players display the secondary video images corresponding to the received video signals.
2. The method according to claim 1, wherein performing primary edge parameter fusion processing on the video image blocks to obtain a primary video image after primary edge fusion processing comprises:
performing edge image extraction at different positions twice on each video image block to obtain a primary edge image area and a secondary edge image area corresponding to the video image blocks;
and according to the distribution relation of the distribution sequence of the video image blocks, carrying out geometric correction, color information unification and light leakage compensation treatment on the edge image area in sequence.
3. The method of claim 2, wherein performing two edge image extractions at different positions on each of the video image blocks to obtain a primary edge image area and a secondary edge image area corresponding to the video image blocks comprises:
acquiring side length information of the video image block, and acquiring a region width value of a primary edge image region by using the side length information through the following formula:
where D_1 represents the region width value of the primary edge image region; L represents the length value of the video image block and D represents the width value of the video image block; α1 and α2 are width-value adjustment coefficients, where α1 ranges from 0.33 to 0.57 and α2 from 1.12 to 1.24;
acquiring the area width value of the secondary edge image area from the area width value of the primary edge image area through the following formula;
where D_2 represents the region width value of the secondary edge image region; β1 and β2 are width-value adjustment coefficients, where β1 ranges from 0.32 to 0.48 and β2 from 2.76 to 2.93;
and intercepting the primary edge image area and the secondary edge image area according to the area width value of the primary edge image area and the area width value of the secondary edge image area.
4. The method according to claim 1, wherein the step of acquiring a video playing whole image of the plurality of video players showing the primary video image and performing secondary fusion of edge parameters according to the video playing whole image to obtain a secondary video image after the secondary fusion comprises:
determining the chromaticity difference and the brightness difference of a preset edge area between two adjacent video image blocks according to the acquired overall video playing image and the preset edge area of each display screen within that overall image; the width value of the preset edge area is greater than the width value of the primary edge image area and is less than the sum of the width value of the primary edge image area and the width value of the secondary edge image area;
if the chroma difference and the brightness difference do not exceed the difference threshold value, or one index of the chroma difference and the brightness difference exceeds the difference threshold value, secondary geometric correction, color information unification and light leakage compensation processing are directly carried out according to the primary edge image area and the secondary edge image area;
when the chrominance difference and the luminance difference of the edge areas of the two video image blocks exceed a difference threshold, the width value of the primary edge image area and the width value of the secondary edge image area are adjusted by combining a preset edge area, and the adjusted width value of the primary edge image area and the adjusted width value of the secondary edge image area are obtained;
and extracting the adjusted primary edge image area and the adjusted secondary edge image area according to the width value of the adjusted primary edge image area and the width value of the adjusted secondary edge image area, and performing secondary geometric correction, color information unification and light leakage compensation on the adjusted primary edge image area and the adjusted secondary edge image area.
5. The method of claim 4, wherein the adjusted width value of the primary edge image region and the adjusted width value of the secondary edge image region are obtained by the following formulas:
where D_t1 represents the region width value of the adjusted primary edge image region; L represents the length value of the video image block, D represents the width value of the video image block, and D0 represents the width value of the preset edge area; α1 and α2 are width-value adjustment coefficients, where α1 ranges from 0.33 to 0.57 and α2 from 1.12 to 1.24;
where D_t2 represents the region width value of the adjusted secondary edge image region; β1 and β2 are width-value adjustment coefficients, where β1 ranges from 0.32 to 0.48 and β2 from 2.76 to 2.93.
6. An edge blending processing and control system for playing video, the system comprising:
the segmentation module is used for receiving a video signal sent by a video acquisition end and segmenting a video image corresponding to the video signal into a plurality of video image blocks;
the primary fusion module is used for performing primary edge parameter fusion processing on the video image blocks to obtain primary video images subjected to primary edge fusion processing;
the video playing module is used for correspondingly sending the signals corresponding to the primary video images to a plurality of video players, and the plurality of video players display the primary video images corresponding to the received video signals;
the secondary fusion module is used for acquiring a video playing integral image of the plurality of video players for displaying the primary video image, and performing secondary fusion of edge parameters according to the video playing integral image to obtain a secondary video image after secondary fusion;
and the video display module is used for correspondingly sending the signals corresponding to the secondary video images to a plurality of video players, and the plurality of video players display the secondary video images corresponding to the received video signals.
7. The system of claim 6, wherein the primary fusion module comprises:
the edge extraction module is used for extracting edge images at different positions twice for each video image block to obtain a primary edge image area and a secondary edge image area corresponding to the video image blocks;
and the primary fusion parameter processing module is used for sequentially carrying out geometric correction, color information unification and light leakage compensation processing on the edge image area according to the distribution relation of the distribution sequence of the video image blocks.
8. The system of claim 7, wherein the edge extraction module comprises:
the primary edge area acquisition module is used for acquiring the side length information of the video image block, and acquiring the area width value of the primary edge image area by using the side length information through the following formula:
where D_1 represents the region width value of the primary edge image region; L represents the length value of the video image block and D represents the width value of the video image block; α1 and α2 are width-value adjustment coefficients, where α1 ranges from 0.33 to 0.57 and α2 from 1.12 to 1.24;
the secondary edge area acquisition module is used for acquiring the area width value of the secondary edge image area from the area width value of the primary edge image area through the following formula;
where D_2 represents the region width value of the secondary edge image region; β1 and β2 are width-value adjustment coefficients, where β1 ranges from 0.32 to 0.48 and β2 from 2.76 to 2.93;
and the edge image extraction module is used for intercepting the primary edge image area and the secondary edge image area according to the area width value of the primary edge image area and the area width value of the secondary edge image area.
9. The system of claim 6, wherein the secondary fusion module comprises:
the difference value determining module is used for determining the chromaticity difference and the brightness difference of a preset edge area between two adjacent video image blocks according to the acquired video playing whole image and the preset edge area of each display screen for playing the whole image on the video; the width value of the preset edge area is greater than the width value of the primary edge image area and is less than the sum of the width value of the primary edge image area and the width value of the secondary edge image area;
the second-level fusion parameter processing module I is used for directly carrying out secondary geometric correction, color information unification and light leakage compensation processing according to the first-level edge image area and the second-level edge image area when the chroma difference and the brightness difference do not exceed a difference threshold value or one index of the chroma difference and the brightness difference exceeds the difference threshold value;
the edge image area adjusting module is used for adjusting the width value of the primary edge image area and the width value of the secondary edge image area by combining a preset edge area when the chrominance difference and the luminance difference of the edge areas of the two video image blocks exceed a difference threshold, and acquiring the adjusted width value of the primary edge image area and the adjusted width value of the secondary edge image area;
and the second-level fusion parameter processing module is used for extracting the adjusted first-level edge image area and the second-level edge image area according to the width value of the adjusted first-level edge image area and the width value of the second-level edge image area, and performing secondary geometric correction, color information unification and light leakage compensation processing on the adjusted first-level edge image area and the adjusted second-level edge image area.
10. The system of claim 9, wherein the adjusted width value of the primary edge image region and the adjusted width value of the secondary edge image region are obtained by the following formulas:
where D_t1 represents the region width value of the adjusted primary edge image region; L represents the length value of the video image block, D represents the width value of the video image block, and D0 represents the width value of the preset edge area; α1 and α2 are width-value adjustment coefficients, where α1 ranges from 0.33 to 0.57 and α2 from 1.12 to 1.24;
where D_t2 represents the region width value of the adjusted secondary edge image region; β1 and β2 are width-value adjustment coefficients, where β1 ranges from 0.32 to 0.48 and β2 from 2.76 to 2.93.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011455035.6A CN112581416B (en) | 2020-12-10 | 2020-12-10 | Edge fusion processing and control system and method for playing video |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011455035.6A CN112581416B (en) | 2020-12-10 | 2020-12-10 | Edge fusion processing and control system and method for playing video |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112581416A true CN112581416A (en) | 2021-03-30 |
CN112581416B CN112581416B (en) | 2021-08-20 |
Family
ID=75131470
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011455035.6A Active CN112581416B (en) | 2020-12-10 | 2020-12-10 | Edge fusion processing and control system and method for playing video |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112581416B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117097017A (en) * | 2023-08-09 | 2023-11-21 | 盐城工学院 | New energy bidirectional charging station with remote monitoring function |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6813391B1 (en) * | 2000-07-07 | 2004-11-02 | Microsoft Corp. | System and method for exposure compensation |
CN103714525A (en) * | 2013-12-24 | 2014-04-09 | 北京淳中视讯科技有限公司 | Fusion-band adjusting method, adjusting device and adjusting system for image fusion processing |
CN104486659A (en) * | 2014-12-05 | 2015-04-01 | 华东师范大学 | Edge blending processing and control system and edge blending processing and control method for playing videos |
US20150170342A1 (en) * | 2013-09-05 | 2015-06-18 | Arecont Vision,LLC. | System and method for spatio video image enhancement |
CN107507155A (en) * | 2017-09-25 | 2017-12-22 | 北京奇虎科技有限公司 | Video segmentation result edge optimization real-time processing method, device and computing device |
CN107684721A (en) * | 2017-09-01 | 2018-02-13 | 北京乐动卓越科技有限公司 | Creation method and editing system for a super-large map scene |
US9948869B2 (en) * | 2016-07-04 | 2018-04-17 | Yuan-Ting Fang | Image fusion method for multiple lenses and device thereof |
CN112492284A (en) * | 2020-11-23 | 2021-03-12 | 广州励丰文化科技股份有限公司 | Edge fusion method and device based on multiple projectors and electronic equipment |
2020-12-10: Application CN202011455035.6A filed; granted as patent CN112581416B (status: Active)
Non-Patent Citations (2)
Title |
---|
MARIUS PEDERSEN 等: "Seam-Based Edge Blending for Multi-Projection Systems", 《INTERNATIONAL JOURNAL OF SIGNAL PROCESSING, IMAGE PROCESSING AND PATTERN RECOGNITION》 * |
SONG ZIJIAN: "Research on Key Technologies of Multi-Channel Projection Stereoscopic Tiled Display Wall", 《CHINA STEREOLOGY AND IMAGE ANALYSIS》 *
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117097017A (en) * | 2023-08-09 | 2023-11-21 | 盐城工学院 | New energy bidirectional charging station with remote monitoring function |
CN117097017B (en) * | 2023-08-09 | 2024-04-05 | 盐城工学院 | A new energy bidirectional charging station with remote monitoring function |
Also Published As
Publication number | Publication date |
---|---|
CN112581416B (en) | 2021-08-20 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN101692326B (en) | System and method for on-site point-by-point calibration of brightness and chrominance over the whole screen of an LED display | |
DE102011009111B4 (en) | Multi-screen display device | |
CN102075688B (en) | Wide dynamic processing method for single-frame double-exposure image | |
US9369636B2 (en) | Video signal processing method and camera device | |
CA3001430C (en) | Image processing method and device for led display screen | |
EP2025176A1 (en) | Converting a colorimetric transform from an input color space to an output color space | |
CN201868072U (en) | System for on-site point-by-point correction of brightness and chroma of a whole LED (Light-Emitting Diode) display screen | |
CN106506950A (en) | Image processing method and device | |
CN102611828A (en) | Real-time enhanced processing system for foggy continuous video image | |
CN114866809B (en) | Video conversion method, apparatus, device, storage medium, and program product | |
CN107342054B (en) | Display device, display control method and display control unit | |
CN110120207A (en) | Automatic adjustment system and correction method for chroma-luminance consistency of a spliced large screen | |
CN102801899A (en) | Method and device for improving image display quality of spliced screen | |
CN110349097B (en) | Color enhancement method for image significance and image processing device | |
CN112581416B (en) | Edge fusion processing and control system and method for playing video | |
CN102426828A (en) | Screen edge color adjusting method and device | |
CN105306852A (en) | Multi-projector stitching fusion method for high-quality visual effect | |
CN113611237A (en) | Method and system for adjusting MINI LED backlight display picture | |
CN109144448A (en) | Image display method and device for a spliced large screen | |
US8958640B1 (en) | Image color cast correction using groups of pixels | |
CN105788499A (en) | LED display screen point-to-point brightness and chroma correction method based on constant-temperature CCD camera | |
CN108377369A (en) | Binning method for Bayer-format color images | |
CN110490838A (en) | Method and device for boundary processing between different-resolution zones of a display panel | |
CN107277475A (en) | Laser television image processing method, laser television and computer-readable recording medium | |
CN107578737B (en) | Backlight partition color desaturation optimization method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
CP03 | Change of name, title or address |
Address after: 518000 Room 201, building 4, software industry base, No. 19, 17 and 18, Haitian 1st Road, Binhai community, Yuehai street, Nanshan District, Shenzhen, Guangdong
Patentee after: Shenzhen qidebao Technology Co.,Ltd.
Address before: 518000 1705, satellite building, 61 Gaoxin South 9th Road, Gaoxin high tech Zone community, Yuehai street, Nanshan District, Shenzhen City, Guangdong Province
Patentee before: Shenzhen Puhui Zhilian Technology Co.,Ltd.