WO2023039753A1 - Control method and device for backlight display - Google Patents
Control method and device for backlight display (一种背光显示的控制方法及装置)
- Publication number: WO2023039753A1
- Application: PCT/CN2021/118554 (CN2021118554W)
- Authority: WO — WIPO (PCT)
- Prior art keywords: backlight, value, power value, area, salient
Classifications
- G09G5/10 — Control arrangements or circuits for visual indicators; intensity circuits
- G06V10/26 — Segmentation of patterns in the image field; clustering-based techniques; detection of occlusion
- G06V10/462 — Salient regional features, e.g. scale-invariant feature transforms (SIFT)
- G06V10/60 — Extraction of image or video features relating to illumination properties, e.g. using a reflectance or lighting model
- G06V10/80 — Fusion, i.e. combining data from various sources at the sensor, preprocessing, feature-extraction or classification level
- G06V10/82 — Image or video recognition or understanding using neural networks
- G09G3/3426 — Control of the illumination source using several separately controlled sources corresponding to different display panel areas distributed in two dimensions, e.g. a matrix
- G06V2201/02 — Recognising information on displays, dials, clocks
- G06V2201/07 — Target detection
- G09G2320/062 — Adjustment of illumination source parameters depending on the type of information to be displayed
- G09G2320/066 — Adjustment of display parameters for control of contrast
- G09G2320/0686 — Adjustment of display parameters with two or more screen areas displaying information with different brightness or colours
Description
- The embodiments of the present application relate to the field of artificial intelligence (AI), and in particular to a method and device for controlling a backlight display.
- In a traditional liquid crystal display (LCD), the liquid crystal molecules cannot emit light on their own, so a backlight source is required to make the content on the LCD panel visible.
- The display principle of an LCD screen is as follows: when the LED backlight is turned on, the RGB attribute value of each pixel on the screen controls the transmitted luminous flux of the red, green and blue color components, and a colorful picture is produced on the screen by three-primary-color synthesis. Because the brightness of the backlight cannot be adjusted during this process, traditional LCD display has the following disadvantages: the power value is large, the image contrast is low, and light leakage is unavoidable.
- To address this, the existing technology introduces local dimming algorithms.
- This technology mainly controls the brightness and darkness of the display backlight and then applies pixel compensation so that the displayed image is not distorted compared with the image before adjustment, improving the dynamic contrast of the image while reducing the power value to save energy.
- Divided by backlight dimension, dimming algorithms include: 0-dimensional backlight adjustment (0D dimming), i.e. uniform dimming; 1-dimensional backlight adjustment (1D dimming), i.e. line dimming; and 2-dimensional backlight adjustment (2D dimming), i.e. local dimming.
- 0D dimming is a capability that essentially every monitor has.
- A local dimming algorithm mainly derives a backlight power value from the image content of the current partition of the display screen and uses it to control the backlight brightness; the backlight brightness is proportional to the power value.
- The higher the overall brightness of the screen, the higher the backlight power value and the higher the required power.
- When the backlight power value is higher than the rated power value of the backlight module, the backlight brightness must be reduced, and reducing the overall backlight brightness may degrade the image; when the backlight power value is lower than the rated power value of the module, part of the remaining power value is not used efficiently. Therefore, how to adjust the brightness of the backlight without weakening the image effect, and how to reduce or increase the backlight power value in individual areas so as to improve contrast and the perception of light, are problems that urgently need to be solved.
- The embodiments of the present application provide a method and device for controlling a backlight display.
- An embodiment of the present application provides a method for controlling a backlight display, the method comprising: obtaining a fused image according to an input image; identifying a salient area and a non-salient area in the fused image; increasing the backlight power value of the salient area; and/or reducing the backlight power value of the non-salient area.
- Obtaining the fused image according to the input image includes: detecting the input image with a traditional saliency detection algorithm to obtain a first detection result; detecting the input image with an artificial intelligence (AI) information recognition algorithm to obtain a second detection result; and obtaining the fused image according to the first detection result and the second detection result.
- Detecting the input image with the traditional saliency detection algorithm to obtain the first detection result includes: detecting low-level prior information and/or high-level prior information in the input image with the traditional saliency detection algorithm, and outputting the first detection result. The low-level prior information includes at least one of contrast prior information or spatial position prior information; the high-level prior information includes at least one of face, text or object information. In this way, the content of interest to the human eye in the input image can be identified.
- Detecting the input image with the AI information recognition algorithm to obtain the second detection result includes: detecting a salient target in the input image with the AI information recognition algorithm, segmenting the salient target, and retaining the edge information of the salient target to obtain the second detection result. In this way, the region where the salient target in the input image is located can be identified.
- Obtaining the fused image according to the first detection result and the second detection result includes: determining a first weight value of the high-brightness pixels in the first detection result; determining a second weight value of the high-brightness pixels in the second detection result; and adjusting the weight values of the high-brightness pixels according to the first weight value and the second weight value to obtain the fused image.
- In this way, the weights of the high-brightness pixels obtained by the two detection methods can be fused, the subject area and background area can be brightness-processed according to the fused weights, and a fused image with clear edges that reflects the characteristics of the image content can be obtained.
- Identifying the salient area and the non-salient area in the fused image includes: determining the brightness distribution of the input image; adjusting the first saliency value of each pixel on the fused image according to the brightness distribution to obtain a second saliency value; summing the second saliency values of the plurality of pixels within a set area on the fused image and taking the average to obtain the block saliency value of that set area, where there are multiple set areas; determining the salient area in the fused image according to the block saliency values of the set areas; and determining the non-salient area according to the salient area in the fused image.
- In this way, the saliency distribution of the fused image can be adjusted according to the brightness distribution of the input image to obtain block saliency information, and the salient and non-salient areas of the fused image can be separated to facilitate control of the backlight power value and brightness distribution.
- Adjusting the first saliency value of each pixel on the fused image according to the brightness distribution of the input image to obtain the second saliency value includes: determining the brightness distribution of the input image; obtaining a saliency-gain-versus-brightness-distribution adjustment curve from the weighted average of the brightness distribution with preset brightness-weight curves, where the brightness distribution includes the weight values of low, medium and high brightness; obtaining the saliency gain value of each pixel from the adjustment curve, where the saliency gain value is used to increase or decrease the saliency value of each pixel, and the saliency value of a pixel is the degree of saliency of that single pixel relative to the entire image; and increasing or decreasing the first saliency value of each pixel on the fused image according to the saliency gain value to obtain the second saliency value of each pixel. In this way, the brightness distribution on the fused image can be adjusted to obtain the second saliency value of each pixel.
- Increasing the backlight power value of the salient area includes: determining a first backlight power value according to the input image; and, when the first backlight power value is less than the rated total backlight power value, adding the remaining power value to the salient area, where the remaining power value is the difference between the first backlight power value and the rated total backlight power value. In this way, the remaining power value can be allocated to multiple salient areas in proportion to their block saliency values, increasing the backlight power value of those areas to improve contrast and the perception of light.
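The allocation step described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the function name, the use of NumPy arrays, the saliency threshold and the proportional-sharing formula are assumptions; only the idea of distributing the remaining power to salient blocks in proportion to their block saliency values comes from the text.

```python
import numpy as np

def allocate_remaining_power(block_power, block_saliency, rated_total_power,
                             saliency_threshold=100.0):
    """Distribute unused backlight power to salient backlight units.

    block_power:    per-unit pre-output backlight power values (1D array)
    block_saliency: per-unit block saliency values (1D array)
    """
    first_power = block_power.sum()                 # pre-output total power value
    remaining = rated_total_power - first_power     # unused power budget
    if remaining <= 0:
        return block_power                          # nothing to redistribute
    salient = block_saliency > saliency_threshold   # salient backlight units
    if not salient.any():
        return block_power
    weights = block_saliency * salient              # zero weight for non-salient units
    boost = remaining * weights / weights.sum()     # proportional share of the remainder
    return block_power + boost
```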
- Reducing the backlight power value of the non-salient area includes: determining a first backlight power value according to the input image; and, when the first backlight power value is greater than the rated total backlight power value, reducing the backlight power value of the non-salient area while maintaining the backlight power value of the salient area. In this way, contrast and the perception of light can be improved without reducing the brightness of the salient area.
- Determining the first backlight power value according to the input image includes determining the first backlight power value according to the sum of the current backlight power values of the input image. In this way, the pre-output backlight power value can be determined.
- Reducing the backlight power value of the non-salient area includes: obtaining a brightness gain value from the average brightness value of the input image and a first preset curve, where the first preset curve is a brightness-versus-brightness-gain adjustment curve; obtaining a saliency gain value from the block saliency value of the input image and a second preset curve, where the second preset curve is a saliency-value-versus-saliency-gain adjustment curve; determining the backlight drop intensity from the brightness gain value and the saliency gain value; obtaining a backlight adjustment value from the current backlight power value of the non-salient area and a third preset curve, where the third preset curve is a backlight-power-value-versus-backlight-gain adjustment curve; and reducing the backlight power value of the non-salient area according to the backlight adjustment value and the backlight drop intensity.
- In this way, the backlight drop intensity can be computed from the brightness gain value and the saliency gain value, and the backlight power value of the non-salient area can be reduced.
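The reduction branch can be sketched as below. The first, second and third preset curves appear in the patent only as figures (FIG. 9A-9C), so the lookup tables, the multiplicative combination of the brightness gain and saliency gain, and all names here are assumptions rather than the patent's concrete formulas.

```python
import numpy as np

def reduce_non_salient_backlight(block_power, block_saliency, mean_luma,
                                 luma_curve, sal_curve, backlight_curve,
                                 saliency_threshold=100.0):
    """Sketch of the power-reduction branch (curve shapes are assumptions).

    luma_curve, sal_curve, backlight_curve: 256-entry lookup tables standing in
    for the first, second and third preset curves (FIG. 9A-9C).
    """
    luma_gain = luma_curve[int(mean_luma)]                       # brightness gain value
    sal_gain = sal_curve[np.clip(block_saliency.astype(int), 0, 255)]
    drop_strength = luma_gain * sal_gain                         # assumed combination rule
    adjust = backlight_curve[np.clip(block_power.astype(int), 0, 255)]
    non_salient = block_saliency <= saliency_threshold
    new_power = block_power.astype(float).copy()
    new_power[non_salient] -= adjust[non_salient] * drop_strength[non_salient]
    return np.maximum(new_power, 0.0)                            # never drive below zero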
- Reducing the backlight power value of the non-salient area further includes: determining a second backlight power value; and, when the second backlight power value is greater than the rated backlight power value, reducing the backlight power value of the non-salient area according to the ratio of the second backlight power value to the rated backlight power value.
- In this way, the backlight drive current can be partially reduced, keeping the brightness of the salient area of the screen unchanged while the brightness of the non-salient area decreases and the contrast ratio increases.
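A minimal sketch of this ratio-based scaling, assuming the scaling is applied only to the non-salient backlight units and that powers are held in NumPy arrays (all names are hypothetical):

```python
import numpy as np

def scale_to_rated_power(block_power, block_saliency, rated_total_power,
                         saliency_threshold=100.0):
    """If the adjusted (second) total power still exceeds the rated value,
    scale down the non-salient units by the rated/actual power ratio."""
    second_power = block_power.sum()
    if second_power <= rated_total_power:
        return block_power                          # already within the rated budget
    ratio = rated_total_power / second_power        # rated vs. pre-output ratio
    non_salient = block_saliency <= saliency_threshold
    new_power = block_power.astype(float).copy()
    new_power[non_salient] *= ratio                 # partially reduce the drive for these units
    return new_power
```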
- An embodiment of the present application provides a backlight display control device, including: an image post-processing module, configured to obtain a fused image according to an input image and identify a salient area and a non-salient area in the fused image; and a power value control module, configured to increase the backlight power value of the salient area and/or decrease the backlight power value of the non-salient area.
- The image post-processing module includes an area identification unit configured to: determine the brightness distribution of the input image; adjust the first saliency value of each pixel on the fused image according to the brightness distribution to obtain a second saliency value; sum and average the second saliency values of the plurality of pixels within each set area of the fused image to obtain the block saliency value of the set area, where the set areas are the areas corresponding to the backlight units of the backlight module and there are multiple such areas; determine the salient area in the fused image according to the block saliency values of the set areas; and determine the non-salient area according to the salient area in the fused image.
- The power value control module includes: a first backlight statistics unit, configured to obtain a first backlight power value according to the brightness distribution of the input image; a first control unit, configured to add the remaining power value to the salient area when the first backlight power value is less than the rated total backlight power value, where the remaining power value is the difference between the first backlight power value and the rated total backlight power value; and a second control unit, configured to reduce the backlight power value of the non-salient area and maintain the backlight power value of the salient area when the first backlight power value is greater than the rated total backlight power value.
- The power value control module further includes: a second backlight statistics unit, configured to determine a second backlight power value; and a third control unit, configured to reduce the backlight power value of the non-salient area according to the ratio of the second backlight power value to the rated backlight power value when the second backlight power value is greater than the rated backlight power value.
- An embodiment of the present application provides an electronic device, including: at least one memory for storing a program; at least one processor for executing the program stored in the memory; and a backlight module for lighting the backlight, the backlight module being connected to the processor through a transmission interface and including a plurality of backlight units. When the program stored in the memory is executed, the processor performs the method according to any one of the above implementations, so that the plurality of backlight units light the backlights of their corresponding areas.
- The embodiments of the present application provide a computer storage medium storing instructions which, when run on a computer, cause the computer to execute the method according to any one of the first aspect.
- The embodiments of the present application provide a computer program product including instructions which, when run on a computer, cause the computer to execute the method according to any one of the first aspect.
- An embodiment of the present application provides a backlight display control device, including a processor and a transmission interface, where the processor receives or sends data through the transmission interface and is configured to call program instructions stored in a memory so that the control device executes the method according to any one of the first aspect.
- The embodiments of the present application provide a display control method that uses AI information to identify the area of interest to the human eye and dynamically adjusts the backlight distribution within the rated range of the backlight power value. The pixel-level saliency values corresponding to each backlight partition are averaged to obtain the corresponding block-level saliency value. When the pre-output total backlight power value is less than the rated total backlight power value, the remaining power value is allocated to the salient area according to the image saliency information to increase the backlight brightness; when the pre-output backlight power value is greater than the rated backlight power value, the brightness of the non-salient area is reduced while maintaining the performance of the salient area.
- FIG. 1 is a system architecture diagram of a backlight display control method proposed in an embodiment of the present application;
- FIG. 2 is a flow chart of a backlight display control method proposed in an embodiment of the present application;
- FIG. 3 is a flow chart of the image fusion processing of a traditional saliency detection algorithm and an AI information recognition algorithm in a backlight display control method proposed in an embodiment of the present application;
- FIG. 4A is a schematic diagram of an input image in a backlight display control method proposed in an embodiment of the present application;
- FIG. 4B is a schematic diagram of the first detection result obtained after detecting the input image shown in FIG. 4A with a traditional saliency detection algorithm;
- FIG. 4C is a schematic diagram of the second detection result obtained after detecting the input image shown in FIG. 4A with the AI information recognition algorithm;
- FIG. 4D is a schematic diagram of the fused image obtained after fusing the first detection result and the second detection result;
- FIG. 5 is a histogram describing a brightness distribution;
- FIG. 6 is a tiled diagram of three preset brightness-weight distribution curves;
- FIG. 7 is a schematic diagram of the saliency value adjustment curve obtained after the weighted average of the three preset weight distribution curves shown in FIG. 6;
- FIG. 8 is a flow chart of dynamically adjusting the backlight power value of each backlight unit according to the block saliency value in a backlight display control method proposed by an embodiment of the present application;
- FIG. 9A is a schematic diagram of a first preset curve in a backlight display control method proposed by an embodiment of the present application;
- FIG. 9B is a schematic diagram of a second preset curve in a backlight display control method proposed by an embodiment of the present application;
- FIG. 9C is a schematic diagram of a third preset curve in a backlight display control method proposed by an embodiment of the present application.
- "At least one (item)" means one or more, and "multiple" means two or more.
- "And/or" describes an association relationship between associated objects and indicates that three relationships can exist; for example, "A and/or B" can mean: only A exists, only B exists, or both A and B exist, where A and B can be singular or plural.
- The character "/" generally indicates that the associated objects are in an "or" relationship.
- "At least one of the following" or similar expressions refer to any combination of these items, including any combination of single or plural items.
- For example, "at least one of a, b or c" can mean: a, b, c, "a and b", "a and c", "b and c", or "a and b and c", where a, b and c can each be single or multiple.
- The first existing solution is a backlight adjustment method. First, the position of the main object in the image is detected to determine the subject area, and then the tone curve of the picture is adjusted to make the subject area brighter or the background area darker. Adaptive dynamic contrast enhancement is performed on the subject area and the background area, and finally the two areas are fused. This technique accentuates the main areas of the frame. However, since the subject area and the background area are processed separately, contour ripples are likely to appear at the edge after the two areas are fused, and flickering problems are prone to appear in the time domain.
- The second existing solution is a backlight adjustment method based on power value control.
- When powering on, the backlight system is initialized, the brightness signal of the image in each partition of the screen is collected, the pre-drive current value and the pre-output total power value are calculated from the brightness signal, and the pre-output total power value is compared with the target power value.
- When the pre-output total power value is less than the target power value, the target area is lit with the pre-drive current; when the pre-output total power value is greater than the target power value, a ratio is obtained by comparing the target power value with the pre-output total power value, and the pre-drive current value is multiplied by this ratio to reduce the current used to light the target area.
- This scheme uses power value control and adjusts the drive current by comparing the pre-output total power value with the target total power value; the control method has nothing to do with the characteristics of the image content.
- When the pre-output total power value is lower than the target total power value, the backlight is driven with a high current, resulting in a brighter overall picture and a decrease in contrast.
- When the pre-output total power value is higher than the target total power value, the backlight drive current is reduced globally, resulting in a decrease in overall screen brightness and a corresponding decrease in contrast.
- This scheme ignores the characteristics of the image content and only increases or decreases the brightness of the backlight globally.
- Neither of the above two schemes considers how to dynamically allocate the backlight power value according to the area of the screen that the human eye is interested in.
- Nor do they consider, when the backlight power value is lower than the rated power value, how to use the remaining power value under the power value constraint to increase the backlight power value of individual areas and thereby enhance the contrast and light perception of those areas to improve image quality.
- The embodiment of the present application proposes a backlight display control method, which controls the backlight power value and brightness distribution according to saliency information, identifies the area of the image that the human eye is interested in through artificial intelligence (AI) information, and dynamically allocates the backlight power value to maintain or improve the contrast of the area of interest, thereby improving the displayed image quality under the same power value constraint.
- The backlight display control method proposed in the embodiment of the present application is applicable to the system on chip (SOC) of a TV product chip and the timing controller (TCON) module of a PC display chip; the key control functions can be implemented in hardware, and software implementation is also supported.
- FIG. 1 is a system architecture diagram of a backlight display control method proposed by an embodiment of the present application.
- The system architecture includes an image post-processing module 11, a backlight statistics module 12, a power value control module 13 and a backlight module 14.
- The image post-processing module 11 obtains the input image and processes it with the traditional saliency detection algorithm and the AI information recognition algorithm to obtain the fused image; it then calculates, from the brightness distribution of the input image and the fused image, the block saliency value information corresponding to each backlight unit in the backlight module 14. The block saliency values are used to indicate at least one salient region of the input image.
- The backlight module 14 includes a plurality of backlight units.
- The backlight statistics module 12 counts the pre-output total backlight power value of the backlight module 14 according to the input image and records it as the first backlight power value.
- The power value control module 13 adaptively allocates the backlight power corresponding to at least one salient area according to the result of comparing the first backlight power value with the rated total backlight power value. For example, when the pre-output total backlight power value is less than the rated total backlight power value, the remaining power value is added to at least one salient area and the adjusted pre-output backlight power value of the backlight module 14 is obtained; when the pre-output total backlight power value is greater than the rated total backlight power value, the backlight power value of the non-salient area is reduced while the backlight power value of the salient area is maintained.
- The adjusted pre-output backlight power value of the backlight module 14 is recorded as the second backlight power value; the remaining power value is the difference between the first backlight power value and the rated total backlight power value.
- The backlight module 14 turns on the backlight of each backlight unit according to the second backlight power value.
- The backlight display control method proposed in the embodiment of the present application obtains a fused image according to an input image, identifies a salient area and a non-salient area in the fused image, increases the backlight power value of the salient area, and/or reduces the backlight power value of the non-salient area.
- FIG. 2 is a flow chart of a backlight display control method proposed by an embodiment of the present application. As shown in FIG. 2, the flow of the backlight display control method proposed by the embodiment of the present application is as follows.
- In step S11, the input image is acquired and processed by fusing a traditional saliency detection algorithm and an AI information recognition algorithm to obtain a fused image; the block saliency value corresponding to each backlight unit of the backlight module is obtained according to the brightness distribution of the input image and the fused image. The block saliency value of each area is used to indicate the salient and/or non-salient regions of the input image.
- In step S12, the first backlight power value may be obtained by counting the pre-output total backlight power value of the backlight module 14 according to the input image.
- In step S13, if the first backlight power value is less than the rated total backlight power value, the remaining power value is added to at least one salient area to obtain the second backlight power value pre-output by the backlight module; the remaining power value is the difference between the first backlight power value and the rated total backlight power value. When the first backlight power value is greater than the rated total backlight power value, the backlight power value of at least one non-salient area can instead be reduced while the backlight power value of at least one salient area is maintained, to obtain the second backlight power value pre-output by the backlight module.
- In step S14, the backlight module turns on the backlight of each backlight unit according to the second backlight power value.
- The backlight display control method proposed in the embodiment of this application effectively identifies the area of interest of the human eye based on AI detection information and, combined with traditional saliency detection technology, adaptively allocates backlight power values to adjust the pre-output backlight power value of the backlight module.
- When the pre-output backlight power value is less than the rated backlight power value, image quality can be improved by enhancing the contrast of the salient area; when the pre-output backlight power value is greater than the rated backlight power value, the performance of the salient area can be maintained by reducing the brightness of the non-salient area, minimizing the loss of image quality perceivable by the human eye.
- In this way, the backlight module can increase the contrast of the region of interest and improve the image quality experience under the power value constraint.
- The backlight display control method proposed in the embodiment of the present application dynamically adjusts the backlight in combination with the power value, without side effects such as contour ripples.
- The backlight display control method proposed in the embodiment of the present application uses AI technology to identify the salient areas of the image and adaptively allocates backlight power values to the salient areas, which not only satisfies the power value constraint but also effectively improves the image quality of the salient areas and reflects the content characteristics of the target image.
- FIG. 3 is a flow chart of image fusion processing with a traditional saliency detection algorithm and an AI information recognition algorithm in a backlight display control method proposed by an embodiment of the present application.
- As shown in FIG. 3, step S11 of the backlight display control method proposed by an embodiment of the present application may perform image processing through steps S21-S28 to obtain the block saliency value information.
- Traditional saliency detection algorithms are used to predict eye fixation, that is, to detect the area that the human eye finds most interesting while viewing a natural scene image over a period of time.
- An input image is obtained, and a traditional saliency detection algorithm is used to process the input image to obtain a corresponding traditional saliency image.
- The traditional saliency image can be recorded as the first detection result.
- For example, an input image detected by the traditional saliency detection algorithm model is shown in FIG. 4A, and the resulting traditional saliency image is shown in FIG. 4B.
- The traditional saliency image carries the low-level features of the input image, such as color, intensity and orientation.
- The traditional saliency detection algorithm can also obtain the traditional saliency image from low-level prior information, such as the contrast and spatial position of the input image, combined with high-level prior information such as faces and text.
- The low-level prior information includes contrast prior information, center prior information and background prior information.
- If a pixel or region of the input image has features such as brightness and color that differ significantly from other regions, that pixel or region has a high probability of being a salient region. The salient features, such as brightness and color, of that pixel or region compared with other regions constitute the contrast prior information, and the weight value of the contrast prior information can be increased in the traditional saliency detection algorithm model.
- The area close to the center of the input image has a high probability of being a salient area; this central feature of the input image is the center prior information, and the weight value of the center prior information can be increased in the traditional saliency detection algorithm model.
- The background area of the input image is usually connected to the image boundary, and a pixel area connected to the boundary has a high probability of being part of the background. This feature of the background area is the background prior information, and the weight value of the background prior information can be adjusted in the traditional saliency detection algorithm model.
- High-level prior information is used to extract some high-level target information, based on computer vision technology, to assist saliency detection.
- High-level target information includes faces, target boundaries, text and other information.
- The salient target image is the result detected by the AI information recognition algorithm, and it can describe the outline of the target object more accurately.
- The AI information recognition algorithm can find the target that the human eye is most interested in in a picture, accurately segment the target, and retain its edge information.
- The input image can be fed into the AI information recognition algorithm model, which detects the salient target in the input image, extracts it, retains its edge information and outputs the salient target image.
- For example, the AI saliency detection algorithm model is used to process the input image shown in FIG. 4A, and the output salient target image is shown in FIG. 4C.
- Compared with the traditional approach, the AI-based salient target detection algorithm reduces the probability of incomplete targets and of mislabeling the background as a salient area, while retaining the edge information of the salient target in the original image.
- The first weight value of the high-brightness pixels can be obtained by counting the proportion of high-brightness pixels in the first detection result, and the second weight value of the high-brightness pixels can be obtained by counting the proportion of high-brightness pixels in the second detection result; here, the proportion of high-brightness pixels is the share of high-brightness pixels among all pixels.
- The second weight value of the highlight pixels in the salient target image is adjusted according to the first weight value of the high-brightness pixels in the traditional saliency image to obtain a fused image, which is recorded as the third saliency image.
- Specifically, the proportion of highlight pixels in the salient target image can be adjusted according to the traditional saliency image, and the adjusted highlight-pixel proportion can be used as a weight to fuse the first detection result and the second detection result to obtain the third saliency image.
- For example, the traditional saliency image shown in FIG. 4B and the salient target image shown in FIG. 4C are fused to obtain the third saliency image shown in FIG. 4D.
- The third saliency image combines the salient region of the traditional saliency image with the salient region of the salient target image; it has the low-level features of the target object in the input image, such as color, intensity and orientation, as well as a more accurate outline of the target object.
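The fusion described above can be illustrated with a short sketch. The patent does not give the exact blending formula, so the ratio-of-high-brightness-pixels weights and the normalised weighted sum below are assumptions, and the highlight threshold of 200 is only an example value.

```python
import numpy as np

def fuse_saliency_maps(traditional_map, ai_map, highlight_threshold=200):
    """Fuse the first (traditional) and second (AI) detection results.

    Both maps are arrays with values in [0, 255]. The proportion of
    high-brightness pixels in each map is used as its fusion weight.
    """
    w1 = np.mean(traditional_map > highlight_threshold)   # first weight value
    w2 = np.mean(ai_map > highlight_threshold)            # second weight value
    total = w1 + w2
    if total == 0:
        return (traditional_map + ai_map) / 2.0            # fall back to a plain average
    fused = (w1 * traditional_map + w2 * ai_map) / total   # third saliency image
    return fused
```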
- The RGB luminance values of each pixel of the input image may be counted to obtain the luminance distribution.
- RGB is currently a common way of expressing color information: it quantitatively expresses color through the brightness of the three primary colors red (R), green (G) and blue (B). Normally, R, G and B each have 256 levels of brightness, expressed as the numbers 0, 1, 2, ... up to 255.
- Alternatively, the YUV luminance value of each pixel of the input image may be counted to obtain the luminance distribution.
- YUV is a color coding method adopted by European television systems, where "Y" represents the luminance (luma) value, i.e. the gray value, and "U" and "V" represent the chrominance (chroma), describing the color and saturation of the image. Since the luminance value Y is separated from the chrominance values U and V in YUV color coding, a complete black-and-white image can be displayed from the luminance information (Y) alone without UV information; therefore, the Y luminance value of each pixel of the input image can be counted to obtain the luminance distribution.
- The luminance value Y is represented numerically as 0, 1, 2, ... up to 255.
- The brightness distribution may be described by a histogram.
- FIG. 5 is a histogram describing the brightness distribution. As shown in FIG. 5, the value on the abscissa (Y axis) indicates the brightness value, and the value on the ordinate (COUNT axis) indicates the number of pixels with that brightness value. For example, FIG. 5 indicates that the number of pixels with a brightness value of 25 in the input image is 100, and the number of pixels with a brightness value of 75 is 80.
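A minimal sketch of counting the brightness distribution, assuming an RGB input and the common BT.601 luma weights (the patent only states that RGB or YUV luminance values may be counted, so the conversion formula is an assumption):

```python
import numpy as np

def luma_histogram(image_rgb):
    """Count the brightness value of every pixel to get the brightness distribution."""
    r, g, b = image_rgb[..., 0], image_rgb[..., 1], image_rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b             # per-pixel luma Y in 0..255
    hist, _ = np.histogram(y, bins=256, range=(0, 256))
    return y, hist                                     # per-pixel luma and COUNT per Y value
```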
- The brightness distribution includes weight values for low, medium and high brightness.
- The weight values of low, medium and high brightness are calculated from the brightness distribution of the input image.
- These weight values indicate the proportion of light and dark areas in the input image.
- For example, the brightness weight distribution (low, medium, high) of an all-black image is (100, 0, 0), indicating that low-brightness pixels make up 100% of the screen; a weight distribution of (0, 0, 100) indicates that high-brightness pixels make up 100% of the screen.
- The numerical values here are examples only and do not limit the range.
- For example, pixels with a YUV luminance value Y below 50 can be classified as low luminance, pixels between 50 and 200 as medium luminance, and pixels above 200 as high luminance, with a maximum luminance value of 255.
- The numerical values here are examples only and do not limit the range.
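Turning the histogram into the low/medium/high weight values can be sketched as follows, using the example thresholds of 50 and 200 from the text (the function name and percentage scaling are illustrative choices):

```python
def brightness_weights(hist, low_max=50, high_min=200):
    """Compute (low, medium, high) brightness weight values from a 256-bin luma histogram."""
    total = hist.sum()
    w_low = hist[:low_max].sum() / total * 100        # % of low-brightness pixels
    w_mid = hist[low_max:high_min].sum() / total * 100
    w_high = hist[high_min:].sum() / total * 100      # % of high-brightness pixels
    return w_low, w_mid, w_high                       # e.g. an all-black image gives (100, 0, 0)
```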
- The weight values of low, medium and high luminance are weighted-averaged with the three corresponding preset weight distribution curves to obtain the saliency-value-versus-brightness-distribution adjustment curve finally adapted to the input image.
- FIG. 6 is a tiled diagram of the three preset weight distribution curves. As shown in FIG. 6, the preset low, mid and high weight distribution curves are tiled in the same coordinate system; the abscissa Y indicates the brightness value, the ordinate gain indicates the gain strength of the saliency value adjustment, min is the minimum gain strength and max is the maximum gain strength. The three brightness weight values (low, medium, high) are weighted-averaged with the three corresponding preset weight distribution curves, for example:
- LUT[Y] = W0 × low[Y] + W1 × mid[Y] + W2 × high[Y]
- where LUT[Y] is the saliency-value-versus-brightness-distribution adjustment curve, low[Y], mid[Y] and high[Y] are the three preset weight distribution curves, W0 is the weight value of low brightness, W1 is the weight value of medium brightness, and W2 is the weight value of high brightness.
- In this way, the adjustment curve LUT[Y] of saliency value versus brightness distribution finally adapted to the input image is obtained.
- For example, the luminance weight distribution (low, medium, high) of an all-black image is (100, 0, 0), indicating that low-brightness content accounts for 100% of the image; weighted-averaging these weights with the three preset weight distribution curves shown in FIG. 6 gives the adjustment curve LUT[Y] shown in FIG. 7.
- The numerical values here are examples only and do not limit the range.
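A sketch of the weighted average that produces LUT[Y]; the shapes of the three preset curves are not specified beyond FIG. 6, so they are passed in as assumed 256-entry arrays, and the normalisation of the percentage weights is also an assumption.

```python
import numpy as np

def build_adjustment_lut(w_low, w_mid, w_high, low_curve, mid_curve, high_curve):
    """Weighted average of the three preset brightness-weight curves:
    LUT[Y] = W0*low[Y] + W1*mid[Y] + W2*high[Y], with the weights normalised."""
    weights = np.array([w_low, w_mid, w_high], dtype=float)
    weights /= weights.sum()                          # normalise the percentage weights
    lut = (weights[0] * np.asarray(low_curve)
           + weights[1] * np.asarray(mid_curve)
           + weights[2] * np.asarray(high_curve))
    return lut                                        # saliency gain per luma value Y (0..255)
```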
- The saliency gain value is used to increase or decrease the saliency value of each pixel.
- The saliency value refers to the degree of saliency of a single pixel relative to the entire image.
- The adjustment curve LUT[Y] may be looked up according to the brightness value of each pixel of the input image to obtain the saliency gain value corresponding to that brightness value.
- For example, the saliency value of a black pixel on a completely black image is 0; if one of the pixels is white, the saliency value of that pixel is 100.
- The numerical values here are examples only and do not limit the range.
- The degree of saliency of each single pixel relative to the entire image can be calculated from the fused image to obtain the saliency value of each pixel of the image.
- This saliency value is recorded as the first saliency value. The adjustment curve LUT[Y] is looked up according to the brightness value of each pixel of the input image to obtain the saliency gain value of each pixel, and the first saliency value of the corresponding pixel on the fused image is adjusted according to this gain value to obtain the second saliency value.
- That is, the first saliency value of the corresponding pixel on the third saliency image can be adjusted according to the saliency gain value to obtain the second saliency value, and the whole image can be traversed to obtain the second saliency value of each pixel.
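A minimal sketch of this adjustment, assuming the saliency gain is applied multiplicatively (the patent only says the gain increases or decreases the per-pixel saliency value, so the multiplication is an assumption):

```python
import numpy as np

def apply_saliency_gain(fused_saliency, input_luma, lut):
    """Look up LUT[Y] per pixel of the input image and adjust the first
    saliency value of the fused image to get the second saliency value."""
    y_index = np.clip(input_luma.astype(int), 0, 255)
    gain = lut[y_index]                               # saliency gain value per pixel
    return fused_saliency * gain                      # second saliency value per pixel
```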
- The block saliency value of each set area can be calculated one by one according to the partition structure of the backlight module and the second saliency value of each pixel on the fused image.
- The partition structure of the backlight module 14 includes a plurality of backlight units, and each backlight unit corresponds to a plurality of pixels on the image.
- The second saliency values of the plurality of pixels in a set area can be summed and averaged to obtain the block saliency value of that set area; traversing the backlight module gives the block saliency value of the area corresponding to each backlight unit.
- For example, an area with a block saliency value below 100 may be marked as a non-salient area, and an area with a block saliency value above 100 may be marked as a salient area.
- The numerical values here are examples only and do not limit the range.
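A sketch of the block saliency computation, assuming the backlight units partition the image into a regular grid (the actual partition structure of backlight module 14 is hardware-dependent), with the example threshold of 100 for marking salient areas:

```python
import numpy as np

def block_saliency(second_saliency, grid_rows, grid_cols, threshold=100.0):
    """Average the second saliency values inside each backlight unit's area."""
    h, w = second_saliency.shape
    blocks = np.zeros((grid_rows, grid_cols))
    for i in range(grid_rows):
        for j in range(grid_cols):
            region = second_saliency[i * h // grid_rows:(i + 1) * h // grid_rows,
                                     j * w // grid_cols:(j + 1) * w // grid_cols]
            blocks[i, j] = region.mean()              # block saliency value of this unit
    salient_mask = blocks > threshold                 # True marks a salient area
    return blocks, salient_mask
```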
- Steps S12-S14 adaptively allocate the backlight power value to the salient areas, which not only meets the power value constraint but also effectively improves the image quality of the salient areas and reflects the content characteristics of the target image. Steps S12-S14 are discussed in detail below.
- Step S12 counts the first backlight power value pre-output by the backlight module according to the input image; this step may include the following steps S301-S303.
- Step S301: initialize the backlight module 14 when powered on, and acquire an input image.
- Step S302: collect the luminance value of each pixel of the input image, calculate the total luminance value of the pixels in the area corresponding to each backlight unit according to the partitioning of the backlight module 14, and obtain the backlight power value corresponding to each backlight unit.
- The partition structure of the backlight module 14 includes a plurality of backlight units; each backlight unit corresponds to a plurality of pixels in a corresponding area of the input image, and the luminance values of those pixels are summed to obtain the total luminance value of the backlight unit.
- Step S303: sum the backlight power values of all backlight units to obtain the pre-output total backlight power value of the backlight module 14, and record it as the first backlight power value.
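Steps S301-S303 can be sketched as follows; the linear mapping from a unit's total luminance to its backlight power value is an assumption, since the patent does not give the exact conversion, and the regular-grid partition and parameter names are likewise illustrative.

```python
import numpy as np

def first_backlight_power(input_luma, grid_rows, grid_cols, power_per_luma=1.0):
    """Sum the luminance covered by each backlight unit, map it to a per-unit
    backlight power value, and total the units (steps S301-S303 in sketch form)."""
    h, w = input_luma.shape
    unit_power = np.zeros((grid_rows, grid_cols))
    for i in range(grid_rows):
        for j in range(grid_cols):
            region = input_luma[i * h // grid_rows:(i + 1) * h // grid_rows,
                                j * w // grid_cols:(j + 1) * w // grid_cols]
            unit_power[i, j] = region.sum() * power_per_luma   # per-unit backlight power
    return unit_power, unit_power.sum()                        # per-unit values and first backlight power value
```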
- FIG. 8 is a flow chart of dynamically adjusting the backlight power value of each backlight unit according to the block saliency value in a backlight display control method proposed by an embodiment of the present application. As shown in FIG. 8, the process of step S13 includes the following steps S401-S407.
- For example, the block saliency value of the first backlight unit is 5, the block saliency value of the second backlight unit is 8, and the block saliency value of the third backlight unit is 8.
- The backlight drop intensity of each backlight unit can be calculated from the block saliency value and the average brightness value of each backlight unit, which can be realized through the following steps:
- obtaining a brightness gain value from the first preset curve, which is a brightness-versus-brightness-gain adjustment curve;
- obtaining a saliency gain value from the second preset curve, which is a saliency-versus-saliency-gain adjustment curve.
- Step S406 includes: turning on the backlight of the corresponding backlight unit of the backlight module according to the second backlight power value.
- In summary, the backlight display control method proposed in the embodiment of this application identifies the area of interest of the human eye based on AI detection information and, combined with traditional saliency detection technology, adaptively allocates backlight power values to adjust the pre-output backlight power value of the backlight module.
- When the pre-output backlight power value is less than the rated backlight power value, image quality is improved by enhancing the contrast of the salient area; when the pre-output backlight power value is greater than the rated backlight power value, the performance of the salient area is maintained by reducing the brightness of the non-salient area, minimizing the loss of image quality perceivable by the human eye.
- In this way, the backlight module can increase the contrast of the region of interest under the power value constraint and improve the image quality experience.
- an embodiment of the present application provides a backlight display control device, including: an image post-processing module, configured to obtain a fused image according to an input image; and identify salient regions and insignificant regions in the fused image ;
- the power value control module is used to increase the backlight power value of the salient area; and/or reduce the backlight power value of the non-salient area.
- the image post-processing module includes: an area identification unit, which is used to determine the brightness distribution of the input image; adjust the first salient value of each pixel on the fused image according to the brightness distribution to obtain the second salient value; set the region according to the fused image The second salient values of multiple pixels in the interior are summed to calculate the average value to obtain the block salient value of the set area; the set area is the corresponding area of each backlight unit in the backlight module, and the number is multiple; according to the set area Determine the salient area in the fused image by the salient value of the block; determine the non-salient area according to the salient area in the fused image.
- the power value control module includes: a first backlight statistics unit, used to obtain the first backlight power value according to the brightness distribution of the input image; a first control unit, used to allocate the remaining power value to the salient area when the first backlight power value is less than the rated total backlight power value, where the remaining power value is the difference between the first backlight power value and the rated total backlight power value; and a second control unit, used to reduce the backlight power value of the non-salient area and maintain the backlight power value of the salient area when the first backlight power value is greater than the rated total backlight power value.
- the power value control module further includes: a second backlight statistics unit, used to determine the second backlight power value; and a third control unit, used to reduce the backlight power value of the non-salient area according to the ratio of the second backlight power value to the rated backlight power value when the second backlight power value is greater than the rated backlight power value.
- an embodiment of the present application provides an electronic device, including: at least one memory for storing programs; at least one processor for executing the programs stored in the memory; and a backlight module for turning on the backlight, where the backlight module is connected to the processor through a transmission interface and includes a plurality of backlight units; when the program stored in the memory is executed, the processor is used to perform the method according to any one of the first aspect, so that the plurality of backlight units turn on the backlight of their corresponding areas.
- the embodiments of the present application provide a computer storage medium, in which instructions are stored, and when the instructions are run on the computer, the computer is made to execute the method according to any one of the first aspect.
- the embodiments of the present application provide a computer program product including instructions, and when the instructions are run on a computer, the computer is made to execute the method according to any one of the first aspect.
- the embodiments of the present application provide a backlight display control device, including a processor and a transmission interface; the processor receives or sends data through the transmission interface; the processor is configured to call the program instructions stored in the memory, so that the control device executes the method according to any one of the first aspect.
- various aspects or features of the embodiments of the present application may be implemented as methods, apparatuses, or articles of manufacture using standard programming and/or engineering techniques.
- the term "article of manufacture" covers a computer program accessible from any computer-readable device, carrier, or medium.
- computer-readable media may include, but are not limited to: magnetic storage devices (e.g., hard disks, floppy disks, or tapes, etc.), optical disks (e.g., compact discs (compact disc, CD), digital versatile discs (digital versatile disc, DVD) etc.), smart cards and flash memory devices (for example, erasable programmable read-only memory (EPROM), card, stick or key drive, etc.).
- various storage media described herein can represent one or more devices and/or other machine-readable media for storing information.
- the term "machine-readable medium” may include, but is not limited to, wireless channels and various other media capable of storing, containing and/or carrying instructions and/or data.
- the sequence numbers of the above-mentioned processes do not imply an order of execution; the order of execution of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present application.
- the disclosed systems, devices and methods may be implemented in other ways.
- the device embodiments described above are only illustrative.
- the division of units is only a logical function division. In actual implementation, there may be other division methods.
- multiple units or components can be combined or integrated into another system, or some features may be ignored or not implemented.
- the mutual coupling or direct coupling or communication connection shown or discussed may be through some interfaces, and the indirect coupling or communication connection of devices or units may be in electrical, mechanical or other forms.
- a unit described as a separate component may or may not be physically separated, and a component displayed as a unit may or may not be a physical unit, that is, it may be located in one place, or may be distributed to multiple network units. Part or all of the units can be selected according to actual needs to achieve the purpose of the solution of this embodiment.
- if the functions are realized in the form of software functional units and sold or used as independent products, they can be stored in a computer-readable storage medium.
- the technical solution of the embodiments of the present application, in essence or in the part that contributes to the prior art, or a part of the technical solution, can be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for making a computer device (which may be a personal computer, a server, or an access network device, etc.) execute all or part of the steps of the methods in the embodiments of the present application.
- the aforementioned storage media include any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Physics & Mathematics (AREA)
- Multimedia (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Software Systems (AREA)
- Computer Hardware Design (AREA)
- Evolutionary Computation (AREA)
- Computing Systems (AREA)
- Medical Informatics (AREA)
- General Health & Medical Sciences (AREA)
- Databases & Information Systems (AREA)
- Artificial Intelligence (AREA)
- Health & Medical Sciences (AREA)
- Liquid Crystal Display Device Control (AREA)
Abstract
一种背光显示的控制方法和装置,方法包括:根据输入图像获得融合图像;识别融合图像中的显著性区域和非显著性区域;提高显著性区域的背光功率值;和/或降低非显著性区域的背光功率值。以此,可以通过识别图像中人眼感兴趣区域,根据显著性信息进行背光功率值和亮度分布控制,动态分配背光功率值,保持或提高人眼感兴趣区域的对比度,实现在相同功率值约束下显示图像质量的提高;在不减弱图像效果的情况下调整背光亮度,降低或增大个别区域的背光功率值,以提高对比度和光照感。
Description
本申请实施例涉及人工智能(artificial intelligence,AI)领域,尤其涉及一种背光显示的控制方法及设备。
传统的液晶显示器(liquid crystal display,LCD)中液晶分子无法自主发光,需要背光源才能看到LCD面板上所显示的内容。LCD屏显示原理为:在LED背光灯打开情况下,通过屏幕各个像素点的RGB属性值控制调节红、绿、蓝颜色分量的透射光通量,进而利用三基色合成原理在屏幕上显示五彩斑斓的画面。由于该过程中背光灯的亮度无法调整,导致传统的LCD屏显示存在以下不足之处:功率值较大,图像对比度较小;不可避免地存在漏光现象。
现有技术引入局部背光调节(local dimming)算法,此技术主要用于控制显示器的背光源的亮暗,再通过像素补偿使显示画面与调节前相比不失真,达到提升画面的动态对比度的同时降低功率值节能的目的。Local dimming算法从背光维度划分包括:0维背光调节(0D dimming),为均匀调光(uniform dimming);1维背光调节(1D dimming),为按线调光(line dimming);2维背光调节(2D dimming),为局部调光(local dimming)。0D Dimming基本是每款显示器都会有的规格,主要用于通过国家能效测试,达到降功率值的目的,但0D Dimming通常不能提升画面的对比度,对显示画面的质量改善不大。2D Dimming通常需要提供背光恒流板来控制2D局部背光,这样会增加额外的成本。
Local dimming算法主要参考显示画面当前分区对应的图像内容得到背光功率值来控制背光灯的亮度,而背光灯亮度与功率值成正比,通常画面整体亮度越高,背光功率值越高,所需功率值也越高。当背光功率值高于背光模组额定功率值时,需要降低背光亮度,而降低整体背光亮度可能会影响图像效果;当背光功率值低于模组额定功率值时,有部分剩余功率值未被有效利用。因此,如何在不减弱图像效果的情况下调整背光亮度,降低或增大个别区域的背光功率值,以提高对比度和光照感是亟需改进和解决的问题。
发明内容
为了解决上述的问题,本申请的实施例提供了一种背光显示控制的方法和装置。
第一方面,本申请的实施例提供了一种背光显示的控制方法,所述方法包括,根据输入图像获得融合图像;识别所述融合图像中的显著性区域和非显著性区域;提高所述显著性区域的背光功率值;和/或降低所述非显著性区域的背光功率值。以此,可以通过识别图像中人眼感兴趣区域,根据显著性信息进行背光功率值和亮度分布控制,动态分配背光功率值,保持或提高人眼感兴趣区域的对比度,实现在相同功率值约束下显示图像质量的提 高;在不减弱图像效果的情况下调整背光亮度,降低或增大个别区域的背光功率值,以提高对比度和光照感。
作为一种可行的实施方式,所述根据输入图像获得融合图像,包括:根据传统显著性检测算法检测所述输入图像获得第一检测结果;根据人工智能(artificial intelligence,AI)信息识别算法检测所述输入图像获得第二检测结果;根据所述第一检测结果和所述第二检测结果获得所述融合图像。以此,可以通过人工智能信息识别图像中人眼感兴趣区域,融合传统显著性检测算法调整图像中的高亮像素的权重和对比度,得到融合图像。
作为一种可行的实施方式,所述根据传统显著性检测算法检测所述输入图像获得第一检测结果,包括:根据所述传统显著性检测算法检测所述输入图像中的低层先验信息和/或高层先验信息,输出所述第一检测结果;所述低层先验信息包括对比度先验信息或空间位置先验信息中的至少一项;所述高层先验信息包括人脸、文字、物体中的至少一项。以此,可以识别输入图像中人眼感兴趣的内容。
作为一种可行的实施方式,所述根据所述AI信息识别算法检测所述输入图像获得第二检测结果,包括:根据所述AI信息识别算法检测所述输入图像中的显著性目标,将所述显著性目标分割,保留所述显著性目标的边缘信息,获得所述第二检测结果。以此,可以识别输入图像中的显著性目标所在的区域。
作为一种可行的实施方式,所述根据所述第一检测结果和所述第二检测结果获得所述融合图像,包括:确定所述第一检测结果中高亮度像素的第一权重值;确定所述第二检测结果中高亮度像素的第二权重值;根据所述第一权重值和所述第二权重值调整高亮度像素的权重值,得到所述融合图像。以此,可以将两种检测方法得到的高亮度像素的权重融合,根据权重融合对主体区域和背景区域进行亮度处理,得到边缘清晰且能够体现图像内容特性的融合图像。
作为一种可行的实施方式,所述识别所述融合图像中的显著性区域和非显著性区域,包括:确定所述输入图像的亮度分布;根据所述亮度分布调整所述融合图像上所述每个像素的第一显著值,得到第二显著值;根据所述融合图像上设定区域内多个像素的第二显著值加和后计算平均值获得所述设定区域的块显著值;所述设定区域的数量为多个;根据所述设定区域的块显著值确定所述融合图像中的显著性区域;根据所述融合图像中的显著性区域确定非显著性区域。以此,可以根据输入图像的亮度分布调整融合图像的显著值分布得到获得块显著值信息,将融合图像中的显著性区域和非显著性区域分割出来,便于进行背光功率值和亮度分布控制。
作为一种可行的实施方式,所述根据所述输入图像的亮度分布调整所述融合图像上所述每个像素的第一显著值,得到第二显著值,包括:确定所述输入图像的亮度分布;根据所述亮度分布与预设的亮度-权重曲线做加权平均,得到显著性增益-亮度分布的调整曲线;所述亮度分布包括低、中、高亮度的权重值;根据所述显著性增益-亮度分布的调整曲线得到每个像素的显著性增益值;所述显著性增益值用于增大或减小每个像素的显著值;所述每个像素的显著值为单个像素点相对整幅图像的显著性程度;根据所述显著性增益值增大或减小所述融合图像上所述每个像素的第一显著值,得到所述每个像素的第二显著值。以此,可以调整融合图像上的亮度分布获得每个像素的第二显著值。
作为一种可行的实施方式,所述提高所述显著性区域的背光功率值,包括:根据所述输入图像确定第一背光功率值;在所述第一背光功率值小于所述额定背光总功率值的情况下,将剩余功率值增加至所述显著性区域;所述剩余功率值为所述第一背光功率值与所述额定背光总功率值之间的差值。以此,可以将剩余功率值按块显著值的比例分配至多个显著性区域,增大该区域的背光功率值,以提高对比度和光照感。
作为一种可行的实施方式,所述降低所述非显著性区域的背光功率值,包括:根据所述输入图像确定第一背光功率值;在所述第一背光功率值大于所述额定背光总功率值的情况下,降低所述非显著性区域的背光功率值,保持所述显著性区域的背光功率值。以此,可以在不减弱显著性区域的亮度条件下提高对比度和光照感。
作为一种可行的实施方式,所述根据所述输入图像确定第一背光功率值,包括,根据所述输入图像的当前背光功率值求和确定第一背光功率值。以此,可以确定预输出的背光功率值。
作为一种可行的实施方式,所述降低所述非显著性区域的背光功率值,包括:根据所述输入图像的平均亮度值和第一预设曲线得到亮度增益值;所述第一预设曲线为亮度-亮度增益值调整曲线;根据所述输入图像的块显著值和第二预设曲线得到显著性增益值;所述第二预设曲线为显著值-显著值增益值调整曲线;根据所述亮度增益值和所述显著性增益值确定背光下降强度;根据所述非显著性区域当前的背光功率值和第三预设曲线得到背光调整值;所述第三预设曲线为背光功率值-背光增益值调整曲线;根据所述背光调整值和背光下降强度降低所述非显著性区域的背光功率值。以此,可以通过亮度增益值和显著值增益值计算获得背光下降强度,根据背光下降强度减小所述非显著性区域的背光功率值。
作为一种可行的实施方式,所述降低所述非显著性区域的背光功率值,包括:确定第二背光功率值;在所述第二背光功率值大于额定背光功率值的条件下,根据所述第二背光功率值与所述额定背光功率值的比值降低所述非显著性区域背光功率值。以此,可以对于超出额定总功率值的情况下,局部降低电流驱动背光,保持画面显著性区域亮度不变,而非显著性区域亮度下降,对比度增加。
第二方面,本申请的实施例提供了一种背光显示控制装置,包括:图像后处理模块,用于根据输入图像获得融合图像;识别所述融合图像中的显著性区域和非显著性区域;功率值控制模块,用于提高所述显著性区域的背光功率值;和/或降低所述非显著性区域的背光功率值。
作为一种可行的实施方式,所述图像后处理模块包括:区域识别单元,用于确定所述输入图像的亮度分布;根据所述亮度分布调整所述融合图像上每个像素的第一显著值,得到第二显著值;根据所述融合图像上设定区域内多个像素的第二显著值加和后计算平均值获得所述设定区域的块显著值;所述设定区域为背光模组中的每个背光单元相对应区域,数量为多个;根据所述设定区域的块显著值确定所述融合图像中的所述显著性区域;根据所述融合图像中的所述显著性区域确定所述非显著性区域。
作为一种可行的实施方式,所述功率值控制模块包括:第一背光统计单元,用于根据输入图像的亮度分布获得第一背光功率值;第一控制单元,用于在所述第一背光功率值小于额定背光总功率值的情况下,将剩余功率值增加至所述显著性区域;所述剩余功率值为所述第一背光功率值与所述额定背光总功率值之间的差值;第二控制单元,用于在所述第 一背光功率值大于额定背光总功率值的情况下,降低所述非显著性区域的背光功率值,保持所述显著性区域的背光功率值。
作为一种可行的实施方式,所述功率值控制模块还包括:第二背光统计单元,用于确定第二背光功率值;第三控制单元,用于在所述第二背光功率值大于所述额定背光功率值的条件下,根据所述第二背光功率值与所述额定背光功率值的比值降低所述非显著性区域背光功率值。
第三方面,本申请的实施例提供了一种电子设备,包括:至少一个存储器,用于存储程序;至少一个处理器,用于执行所述存储器存储的程序;和背光模组,用于点亮背光,所述背光模组通过传输接口与所述处理器连接,所述背光模组包括多个背光单元;当所述存储器存储的程序被执行时,所述处理器用于执行如第一方面任一项所述的方法,使得所述多个背光单元点亮对应区域的背光。
第四方面,本申请的实施例提供了一种计算机存储介质,所述计算机存储介质中存储有指令,当所述指令在计算机上运行时,使得计算机执行如第一方面任一所述的方法。
第五方面,本申请的实施例提供了一种包含指令的计算机程序产品,当所述指令在计算机上运行时,使得所述计算机执行如第一方面任一所述的方法。
第六方面,本申请的实施例提供了一种背光显示控制装置,包括:处理器和传输接口;所述处理器通过所述传输接口接收或发送数据;所述处理器被配置为调用存储在存储器中的程序指令,以使得所述控制装置执行如第一方面任一所述的方法。
本申请的实施例提供了一种显示控制的方法结合AI信息识别人眼感兴趣区域,在背光功率值额定范围内动态调整背光分布;将背光分区对应的像素级显著值统计求平均得到相应的分块级显著值;当预输出背光总功率值小于额定背光总功率值时,根据图像显著信息将剩余功率值分配到显著性区域提高背光亮度。当预输出背光功率值大于额定背光功率值时,降低非显著性区域的亮度而保持显著性区域的表现。
为了更清楚地说明本说明书披露的多个实施例的技术方案,下面将对实施例描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本说明书披露的多个实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其它的附图。
下面对实施例或现有技术描述中所需使用的附图作简单地介绍。
图1为本申请实施例提出的一种背光显示的控制方法的系统架构图;
图2为本申请实施例提出的一种背光显示的控制方法的流程图;
图3为本申请实施例提出的一种背光显示的控制方法中的将传统显著性检测算法和AI信息识别算法融合处理图像的流程图;
图4A为本申请实施例提出的一种背光显示的控制方法中的输入图像示意图;
图4B为采用传统显著性检测算法检测图4A所示的输入图像后获得的第一检测结果示意图;
图4C为采用AI信息识别算法检测图4A所示的输入图像后获得的第二检测结果示意图;
图4D为第一检测结果和第二检测结果融合后获得融合图像示意图;
图5为描述亮度分布的直方图;
图6为预设的三条亮度权重分布曲线平铺图;
图7为图6所示预设的三条权重分布曲线加权平均后的显著值调整曲线示意图;
图8为本申请实施例提出的一种背光显示的控制方法中根据块显著值动态调整每个背光单元的背光功率值的流程图;
图9A为本申请实施例提出的一种背光显示的控制方法中的第一预设曲线示意图;
图9B为本申请实施例提出的一种背光显示的控制方法中的第二预设曲线示意图;
图9C为本申请实施例提出的一种背光显示的控制方法中的第三预设曲线示意图。
在以下的描述中,涉及到“一些实施例”,其描述了所有可能实施例的子集,但是可以理解,“一些实施例”可以是所有可能实施例的相同子集或不同子集,并且可以在不冲突的情况下相互结合。
在以下的描述中,所涉及的术语“第一\第二\第三等”或模块A、模块B、模块C等,仅用于区别类似的对象,不代表针对对象的特定排序,可以理解地,在允许的情况下可以互换特定的顺序或先后次序,以使这里描述的本申请实施例能够以除了在这里图示或描述的以外的顺序实施。
在以下的描述中,所涉及的表示步骤的标号,如S110、S120……等,并不表示一定会按此步骤执行,在允许的情况下可以互换前后步骤的顺序,或同时执行。
应当理解,在本申请中,“至少一个(项)”是指一个或者多个,“多个”是指两个或两个以上。“和/或”,用于描述关联对象的关联关系,表示可以存在三种关系,例如,“A和/或B”可以表示:只存在A,只存在B以及同时存在A和B三种情况,其中A,B可以是单数或者复数。字符“/”一般表示前后关联对象是一种“或”的关系。“以下至少一项(个)”或其类似表达,是指这些项中的任意组合,包括单项(个)或复数项(个)的任意组合。例如,a,b或c中的至少一项(个),可以表示:a,b,c,“a和b”,“a和c”,“b和c”,或“a和b和c”,其中a,b,c可以是单个,也可以是多个。
除非另有定义,本文所使用的所有的技术和科学术语与属于本申请的技术领域的技术人员通常理解的含义相同。本文中所使用的术语只是为了描述本申请实施例的目的,不是旨在限制本申请。
下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行描述。
第一方案为一种背光调节的方法,首先检测图像画面中的主体物体位置确定主体区域,然后对画面做色调曲线调整,使得主体区域更亮或与之相对的背景区域更暗,然后分别对主体区域和背景区域做自适应动态对比度增强,最后两部分区域完成融合。此技术可以突出画面的主体区域。由于对主体区域和背景区域分别处理,两区域融合后在边缘处易出现等高线波纹(contour),同时时域上易出现画面闪烁的问题。
第二方案为一种基于功率值控制的背光调节的方法,在上电时初始化背光系统,采集画面中各分区的图像的亮度信号,根据图像的亮度信号计算预驱动电流值和预输出总功率值,并比较预输出总功率值与目标功率值的大小。当预输出总功率值小于目标功率值时,以预驱动电流点亮目标区域;当预输出总功率值大于目标功率值时,则将目标功率值与预 输出总功率值对比获得比值,将预驱动电流值按计算得到的比值相乘降低预驱动电流后点亮目标区域。该方案采用功率值控制方式,通过预输出总功率值与目标总功率值的对比控制驱动电流的大小,其控制方式与图像内容特性无关。当全图为暗场景时,预输出的总功率值低于目标总功率值,将会以高电流驱动背光,导致画面整体偏亮而对比度下降。而当全图为高亮场景时,预输出的总功率值高于目标总功率值,将会全局降低电流驱动背光,导致整体画面亮度下降而对比度也会相应下降。该方案忽略了图像内容特性只是全局性提高或降低背光亮度。
上述两种方案都没有考虑如何根据画面中人眼感兴趣区域动态分配背光功率值,当背光功率值低于额定功率值时,如何在功率值约束下利用剩余功率值,增大个别区域的背光功率值,增强个别区域的对比度和光照感,以提高图像质量的问题。
本申请实施例提出一种背光显示的控制方法,根据显著性信息进行背光功率值和亮度分布控制,通过人工智能(artificial intelligence,AI)信息识别图像中人眼感兴趣区域,动态分配背光功率值,保持或提高人眼感兴趣区域的对比度,实现在相同功率值约束下显示图像质量的提高。
本申请实施例提出的背光显示的控制方法适用于电视产品芯片的片上系统(system on chip,SOC)和PC显示器芯片的时间控制(time control,TCON)模块,关键控制功能现由硬件实现,同时也支持软件实现。
图1为本申请实施例提出的一种背光显示的控制方法的系统架构图。如图1所示,该系统架构包括图像后处理模块11、背光统计模块12、功率值控制模块13和背光模组14。
其中,图像后处理模块11获取输入图像,利用传统显著性检测算法和AI信息识别算法融合处理输入图像获得融合图像;根据所述输入图像和融合图像的亮度分布计算背光模组14中每个背光单元对应的块显著值信息;块显著值用于指示输入图像的至少一个显著性区域。
在一个可能的实施方式中,背光模组14包括多个背光单元。
背光统计模块12根据输入图像统计该背光模组14的预输出背光总功率值,将预输出背光总功率值记为第一背光功率值。
功率值控制模块13根据第一背光功率值和额定背光总功率值的比较结果,自适应分配至少一个显著性区域对应的背光功率。例如,在预输出背光总功率值小于额定背光总功率值的情况下,将剩余功率值增加至至少一个显著性区域,获得调整后背光模组14的预输出背光功率值;在预输出背光总功率值大于额定背光总功率值的情况下,降低非显著性区域的背光功率值,保持显著性区域的背光功率值。将调整后背光模组14的预输出背光功率值记为第二背光功率值;剩余功率值为第一背光功率值与额定背光总功率值之间的差值。
背光模组14根据第二背光功率值点亮所述每个背光单元对应的背光。
基于上述系统架构,本申请实施例提出的一种背光显示的控制方法,根据输入图像获得融合图像;识别融合图像中的显著性区域和非显著性区域;提高显著性区域的背光功率值;和/或降低非显著性区域的背光功率值。
图2为本申请实施例提出的一种背光显示的控制方法的流程图。如图2所示,本申请实施例提出的背光显示的控制方法的流程如下。
S11,获取输入图像,将传统显著性检测算法和AI信息识别算法融合处理输入图像获得融合图像;根据输入图像和融合图像的亮度分布获得与背光模组中每个背光单元相对应区域的块显著值;每个区域对应的块显著值用于指示输入图像的显著性区域和/或非显著性区域。
S12,在对显著性区域和/或非显著性区域进行功率值控制之前,可以根据输入图像统计背光模组14的预输出背光总功率值,得到第一背光功率值。
S13,可以根据第一背光功率值和额定背光总功率值的比较结果,提高所述显著性区域的背光功率值;和/或降低所述非显著性区域的背光功率值。
在执行步骤S13时,在第一背光功率值小于额定背光总功率值的情况下,将剩余功率值增加至至少一个显著性区域,获得背光模组预输出的第二背光功率值,剩余功率值为第一背光功率值与额定背光总功率值之间的差值;在第一背光功率值大于额定背光总功率值的情况下,还可以降低至少一个非显著性区域的背光功率值,保持至少一个显著性区域的背光功率值,得到背光模组预输出的第二背光功率值。
步骤S14,由背光模组根据第二背光功率值点亮每个背光单元对应的背光。
本申请实施例提出的一种背光显示的控制方法根据AI检测信息有效识别人眼感兴趣区域,结合传统的显著性检测技术,自适应分配背光功率值调整背光模组的预输出背光功率值。当预输出背光功率值小于额定背光功率值时,通过增强显著性区域的对比度,提高画质表现;当预输出背光功率值大于额定背光功率值时,通过降低非显著性区域的亮度而保持显著性区域的表现,将人眼可感知的画质损失降到最低。背光模组通过采用本申请实施例提出的一种背光显示的控制方法,可以在功率值约束下提高感兴趣区域的对比度改善画质体验。
相比于第一方案,本申请实施例提出的一种背光显示的控制方法结合了功率值动态调整背光表现,且不会出现等高线波纹等副作用。
相比于第二方案,本申请实施例提出的一种背光显示的控制方法利用AI技术识别图像显著性区域,针对显著性区域自适应地分配背光功率值,不但能满足功率值约束,而且可有效提高显著性区域的画质表现,体现目标图像的内容特性。
图3为本申请实施例提出的一种背光显示的控制方法中的将传统显著性检测算法和AI信息识别算法融合处理图像的流程图。如图3所示,本申请实施例提出的一种背光显示的控制方法中的步骤S11可以通过执行步骤S21-S28进行图像处理以获得块显著值信息。
S21,根据传统显著性检测算法检测输入图像,获得第一检测结果。
传统显著性检测算法用于预测眼动点(Eye-fixation),即检测人眼在关注一幅自然场景图像的一段时间内的最感兴趣的区域。
在一个可以实现的实施方式中,获取输入图像,对该输入图像使用传统显著性检测算法处理,得到相应的传统显著性图像。可以将传统显著性图像记为第一检测结果。
示例性地,采用传统显著性检测算法模型检测的输入图像如图4A所示,所获得的传统显著性图像如图4B所示,传统显著性图像具有输入图像的低层特征,例如颜色、强度和方向等。
在一个可以实现的实施方式中,传统显著性检测算法还可以根据输入图像的对比度、空间位置等低层先验信息结合人脸、文字等高层先验信息得到传统显著性图像。
示例性地,低层先验信息包括对比度先验信息、中心先验信息和背景先验信息。
示例性地,如果输入图像的一个像素或区域的亮度、颜色等特征与其他的区域相比明显不同,那么该像素或区域大概率是显著性区域。则该像素或区域与其他的区域相比的亮度、颜色等显著性特征为对比度先验信息,在传统显著性检测算法模型中可以提高对比度先验信息的权重值。
示例性地,根据对人眼的注视系统的分析,靠近输入图像的中心的区域大概率是显著性区域,该输入图像的中心特征为中心先验信息,在传统显著性检测算法模型中可以提高中心先验信息的权重值。
示例性地,输入图像的背景区域通常和边界相连接,则与图像边界相连接的像素区域大概率是背景区域,背景区域的特征为背景先验信息,在传统显著性检测算法模型中可以提高背景先验信息的权重值。
高层先验信息用于根据计算机视觉技术提取出一些高层的目标信息辅助显著性检测。高层的目标信息包括人脸,目标的边界,文字等信息。
示例性地,对于一幅包含人的图像,不管有没有特殊意图,人们都会习惯性地把注意力放在人脸或人的其他部位上,因此人为该图像的显著性目标。
S22,根据AI信息识别算法检测输入图像,获得显著性目标图像,可以将显著性目标图像记为第二检测结果。
显著性目标图像是AI信息识别算法检测出来的结果,能够较为准确的刻画出目标物体的轮廓。AI信息识别算法能够在一幅图中找出人眼最感兴趣的目标,准确地将该目标分割出来,并保留该目标的边缘信息。
在一个可以实现的实施方式中,可以将输入图像输入AI信息识别算法模型,检测输入图像中的显著性目标,提取该显著性目标,保留该显著性目标的边缘信息,输出显著性目标图像。例如,采用AI显著性检测算法模型处理如图4A所示的输入图像,输出的显著性目标图像如图4C所示。基于AI的显著目标检测算法降低了目标不完整的概率和误将背景标记为显著性区域的概率,同时保留原始图像中显著性目标的边缘信息。
可以理解的是,传统显著性图像指示的显著性区域与显著性目标图像指示的显著性区域不相同,显著性目标图像是基于AI的显著性检测算法检测出来的结果,能够较为准确地刻画出显著性目标物体的轮廓。
S23,确定所述第一检测结果中高亮度像素的第一权重值和确定所述第二检测结果中高亮度像素的第二权重值。
在一个可以实现的实施方式中,可以统计第一检测结果中高亮度像素的比例值获得高亮度像素的第一权重值;统计第二检测结果中高亮度像素的比例值获得高亮度像素的第二权重值;其中高亮度像素的比例值为高亮度像素在总像素中的占比值。
S24,根据所述第一权重值和所述第二权重值调整高亮度像素的权重值,得到所述融合图像。
在一个可以实现的实施方式中,根据传统显著性图像中高亮度像素的第一权重值对显著性目标图像的高亮像素的第二权重值进行调整,得到融合图像,将融合图像记为第三显著性图像。
在一个可以实现的实施方式中,可以根据传统显著性图像调整显著性目标图像高亮像素占比,将调整后的高亮像素占比的值作为权重,融合第一检测结果和第二检测结果,得到第三显著性图像。
示例性地,调整如图4C所示的显著性目标图像中高亮像素占比,将调整后的高亮像素占比作为融合权重,融合如图4B所示的传统显著性图像和如图4C所示的显著性目标图像,获得如图4D所示的第三显著性图像。第三显著性图像融合了传统显著性图像的显著性区域和显著性目标图像的显著性区域,同时具有输入图像中目标物体的低层特征,例如颜色、强度和方向等,和较为准确的目标物体的轮廓。
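The fusion of the two detection results can be pictured with a small numpy sketch. It assumes both maps are grayscale arrays in the range 0-255, and the weighting rule below is only one plausible reading of steps S23-S24 rather than the exact formula of this application; the helper name fuse_saliency_maps is hypothetical.

```python
import numpy as np

def fuse_saliency_maps(trad_map, ai_map, high_thresh=200):
    """Fuse a traditional saliency map and an AI saliency-target map (both HxW, values 0..255)."""
    trad_map = np.asarray(trad_map, dtype=float)
    ai_map = np.asarray(ai_map, dtype=float)
    # First/second weight values: proportion of high-brightness pixels in each detection result.
    w1 = np.mean(trad_map >= high_thresh)
    w2 = np.mean(ai_map >= high_thresh)
    # Use the adjusted high-brightness proportion as the fusion weight of the AI result.
    alpha = w2 / (w1 + w2 + 1e-6)
    fused = (1.0 - alpha) * trad_map + alpha * ai_map
    return np.clip(fused, 0, 255)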
S25,识别所述融合图像中的显著性区域和非显著性区域,包括以下步骤S251-S254。
S251,确定输入图像的亮度分布;对输入图像上每一个像素点亮度值做统计,以获得亮度分布。
示例性地,可以统计输入图像上每一个像素点的RGB亮度值,获得亮度分布。
RGB是目前常用的一种彩色信息表达方式,它使用红(R)、绿(G)、蓝(B)三原色的亮度来定量表示颜色。通常情况下,R、G、B各有256级亮度,用数字表示为从0、1、2...直到255。
示例性地,还可以统计输入图像上每一个像素点的YUV亮度值,获得亮度分布。
YUV是被欧洲电视系统所采用的一种颜色编码方法,其中“Y”表示亮度值(luminance或luma),也就是灰度值;“U”和“V”表示的则是色度(chrominance或chroma),用于描述影像色彩及饱和度。由于采用YUV颜色编码时亮度值Y和色度值U、V是分离的,没有UV信息根据亮度信息(Y)一样可以显示完整的图像,图像是黑白的,因此可以统计输入图像上每一个像素点的Y亮度值,获得亮度分布。亮度值Y用数字表示为从0、1、2...直到255。
在一个可以实现的实施方式中,可以通过直方图来描述亮度分布。
图5为描述亮度分布的直方图,如图5所示,横坐标Y轴上的值指示亮度值,纵坐标COUNT轴上的值指示某个亮度值对应像素点的数量,例如图5指示了输入图像上亮度值为25的像素点的个数为100个,亮度值为75的像素点的个数为80。
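A minimal sketch of this luminance statistic, assuming the input image is available as a single Y channel with values 0-255 and using the example 50/200 thresholds given later in this description, could look as follows (the helper name is hypothetical):

```python
import numpy as np

def luminance_distribution(y_channel):
    """Histogram of the Y channel (0..255) plus low/mid/high luminance weights in percent."""
    hist = np.bincount(np.asarray(y_channel, dtype=np.uint8).ravel(), minlength=256)
    total = hist.sum()
    w_low = 100.0 * hist[:50].sum() / total        # Y < 50         -> low luminance
    w_mid = 100.0 * hist[50:201].sum() / total     # 50 <= Y <= 200 -> mid luminance
    w_high = 100.0 * hist[201:].sum() / total      # Y > 200        -> high luminance
    return hist, (w_low, w_mid, w_high)
```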
S252,根据输入图像的亮度分布调整融合图像上每个像素的第一显著值,得到第二显著值,包括以下步骤S2521-S2523。
S2521,根据亮度分布与预设的亮度-权重曲线做加权平均,得到显著性增益-亮度分布的调整曲线。亮度分布包括低、中、高亮度的权重值。
在一个可以实现的实施方式中,首先根据输入图像的亮度分布情况分别计算低、中、高亮度的权重值。低、中、高亮度的权重值指示这张输入图像的亮暗分布比例。
示例性地,全黑图像的亮度权重分布(低,中,高)为(100,0,0),指示图像中低亮度像素占画面的比例为100%;全白图像的低、中、高亮度的权重分布为(0,0,100),指示图像中高亮度像素占画面的比例为100%。其中的数值仅为举例,不做范围限定。
示例性地,可以将YUV的亮度值Y为50以下的像素归为低亮度,50-200之间的像素归为中亮度,200以上的像素归为高亮度,亮度值最大值为255。其中的数值仅为举例,不做范围限定。
然后,将低、中、高亮度的权重值分别与对应预设的三条权重分布曲线做加权平均,得到最终适配于输入图像的显著值-亮度分布的调整曲线。
图6为预设的三条权重分布曲线的平铺图;如图6所示,将预设的low、mid、hig权重分布曲线平铺设置于同一坐标系内,横坐标Y指示平铺亮度值,纵坐标gain指示显著值调整的增益强度,min为最小增益强度,max为最大增益强度;将低、中、高三个亮度权重值分别与预设的三条对应低,中,高亮度权重分布曲线做加权平均,如:
LUT[Y]=W0*low+W1*mid+W2*hig (1)
式1中LUT[Y]为显著值-亮度分布的调整曲线,W0为低亮度权重值,W1为中亮度权重值,W2为高亮度权重值。根据式1得到最终适配于输入图像的显著值-亮度分布的调整曲线LUT[Y]。
示例性地,在全黑图像的亮度权重分布(低,中,高)为(100,0,0),指示图像中低亮度场景的图像比例为100%,分别与如图6所示预设的三条权重分布曲线做加权平均,得到如图7所示的调整曲线LUT[Y],该调整曲线LUT[Y]的横坐标指示亮度值Y,纵坐标指示显著值调整增益强度gain,其中W0=100。其中的数值仅为举例,不做范围限定。
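As an illustration of equation (1), the sketch below builds the adjustment curve LUT[Y] from three placeholder weight-distribution curves; the shapes of low_curve, mid_curve and hig_curve are assumptions standing in for the preset curves of Fig. 6, which are tuning parameters rather than values fixed by this application.

```python
import numpy as np

# Placeholder shapes for the three preset weight-distribution curves low/mid/hig of Fig. 6.
low_curve = np.linspace(1.2, 0.8, 256)   # assumed: boosts saliency gain for dark content
mid_curve = np.ones(256)                 # assumed: neutral
hig_curve = np.linspace(0.8, 1.2, 256)   # assumed: boosts saliency gain for bright content

def saliency_gain_lut(w_low, w_mid, w_high):
    """Equation (1): LUT[Y] = W0*low + W1*mid + W2*hig, with weights given in percent."""
    w0, w1, w2 = w_low / 100.0, w_mid / 100.0, w_high / 100.0
    return w0 * low_curve + w1 * mid_curve + w2 * hig_curve

# Example: an all-dark image with weight distribution (100, 0, 0) simply reproduces low_curve.
lut = saliency_gain_lut(100, 0, 0)
```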
S2522,根据所述显著性增益-亮度分布的调整曲线得到每个像素的显著性增益值。该显著性增益值用于增大或减小每个像素对应的显著值。显著值是指单个像素点相对整幅图像的显著性程度。
在一个可以实现的实施方式中,可以根据输入图像的每个像素的亮度值在调整曲线LUT[Y]上查找,得到每个像素的亮度值对应的显著性增益值。
示例性地,在全黑图像上黑色像素的显著值为0;如果其中有一个像素点为白色,那该像素点的显著值为100。其中的数值仅为举例,不做范围限定。
可以理解的是,一个像素点相对整幅图像的其它像素的亮度对比度越高,则像素点的显著值越大,反之一个像素点相对整幅图像的其它像素的亮度对比度越小,则像素点的显著值越小。
S2523,根据显著性增益值增大或减小融合图像上每个像素的第一显著值,得到所述每个像素的第二显著值。
在一个可以实现的实施方式中,可以根据融合图像计算单个像素点相对整幅图像的显著性程度,获得该图像每一个像素点的显著值。将此显著值记为第一显著值;根据输入图像的每个像素的亮度值在调整曲线LUT[Y]上查找,得到每个像素点的显著性增益值,根据此增益值调整融合图像上对应的像素点的第一显著值,得到第二显著值。
在一个可以实现的实施方式中,可以根据显著性增益值调整第三显著性图像上对应的像素点的第一显著值,得到第二显著值,遍历整幅图像后便得到第三显著性图像上每个像素点的第二显著值。
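A compact sketch of steps S2522-S2523 follows; applying the gain multiplicatively to the first saliency value is an assumption, since the description only states that the gain increases or decreases the saliency value.

```python
import numpy as np

def second_saliency(first_saliency, y_channel, lut):
    """Adjust each pixel's first saliency value by the gain looked up from a 256-entry LUT[Y]."""
    gain = lut[np.asarray(y_channel, dtype=np.uint8)]        # per-pixel saliency gain value
    return np.asarray(first_saliency, dtype=float) * gain    # second saliency value
```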
S253,根据融合图像上设定区域内多个像素的第二显著值加和后计算平均值获得设定区域的块显著值;设定区域的数量为多个,设定区域为背光模组中的每个背光单元相对应区域。
在一个可以实现的实施方式中,可以根据背光模组的分区结构和融合图像上每个像素的第二显著值逐一计算获得每一个设定区域的块显著值。
背光模组14的分区结构包括多个背光单元,每个背光单元对应图像上的多个像素点。
在一个可能的实施方式中,可以将一个设定区域的多个像素点的每个像素的第二显著值加和后计算平均值,得到一个设定区域的块显著值;遍历背光模组得到每一个背光单元对应区域的块显著值。
可以理解的是,每一个背光单元对应的多个像素点具有相同的块显著值,即每一个设定区域内的多个像素点具有相同的块显著值。
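The block saliency computation of step S253 can be sketched as a per-unit average over a regular grid of backlight units; the uniform partition of the image into rows x cols rectangles is an assumption about the module layout, and the helper name is hypothetical.

```python
import numpy as np

def block_saliency(second_sal, rows, cols):
    """Average the per-pixel second saliency values inside each backlight unit's set area.

    second_sal : HxW array of second saliency values (the fused, gain-adjusted map).
    rows, cols : number of backlight-unit rows/columns in the backlight module.
    """
    h, w = second_sal.shape
    blocks = np.zeros((rows, cols))
    for r in range(rows):
        for c in range(cols):
            area = second_sal[r * h // rows:(r + 1) * h // rows,
                              c * w // cols:(c + 1) * w // cols]
            blocks[r, c] = area.mean()            # block saliency value of this set area
    return blocks
```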
S254,根据设定区域的块显著值确定融合图像中的显著性区域;根据融合图像中的显著性区域确定非显著性区域。
示例性地,可以将块显著值为100以下的区域记为非显著性区域,块显著值为100以上的区域记为显著性区域。其中的数值仅为举例,不做范围限定。
至此,完成了背光模组中每个背光单元对应区域的块显著值的计算,根据块显著值可以区分输入图像的显著性区域和非显著性区域;接下来的步骤S12-步骤S14针对显著性区域自适应地分配背光功率值,既满足功率值约束,又有效提高显著性区域的画质表现,体现目标图像的内容特性。下面对步骤S12-步骤S14进行详细论述。
执行步骤S12根据输入图像统计背光模组预输出的第一背光功率值,该步骤可以包括以下步骤S301-步骤S303。
步骤S301,在上电时初始化背光模组14,获取输入图像。
步骤S302,采集输入图像上各个像素点的亮度值,根据背光模组14的分区情况分别统计每个背光单元对应区域内的各个像素点的亮度总值,得到每个背光单元对应的背光功率值。
在一个可能的实施方式中,背光模组14的分区结构包括多个背光单元,每个背光单元在输入图像的对应区域内包括多个像素点,一个背光单元对应区域内的多个像素点的亮度值加和,得到该背光单元的亮度总值。
步骤S303,统计每个背光单元的背光功率值,获得背光模组14预输出背光总功率值,将预输出背光总功率值记为第一背光功率值。
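Steps S301-S303 can be sketched as below; treating the per-unit backlight power value as the plain sum of pixel luminance within that unit's area (without any further scaling) is a simplifying assumption, and the helper name is hypothetical.

```python
import numpy as np

def first_backlight_power(y_channel, rows, cols):
    """Sum the pixel luminance inside each backlight unit, then total them (steps S302-S303)."""
    h, w = y_channel.shape
    unit_power = np.zeros((rows, cols))
    for r in range(rows):
        for c in range(cols):
            unit_power[r, c] = y_channel[r * h // rows:(r + 1) * h // rows,
                                         c * w // cols:(c + 1) * w // cols].sum()
    # Per-unit values plus the total pre-output power (the first backlight power value).
    return unit_power, unit_power.sum()
```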
图8为本申请实施例提出的一种背光显示的控制方法中根据块显著值动态调整每个背光单元的背光功率值的流程图。如图8所示,步骤S13的流程包括以下步骤S401-S407。
S401,将预输出背光总功率值与额定背光总功率值做比较,若预输出背光总功率值小于额定背光总功率值,则执行步骤S402-S404,若预输出背光总功率值大于额定背光总功率值,则执行步骤S405-S406。
S402,将额定背光总功率值与预输出背光总功率值两者相减得到剩余功率值。
示例性地,如果预输出背光总功率值为300w,额定背光总功率值为350w,则可分配背光功率值为:350w-300w=50w。
S403,计算第三显著性图像中全部背光单元的块显著值的总和,求出每单位显著值可分配的背光功率值。
示例性地,计算第三显著性图像对应的全部背光单元的块显著值的总和为25,则每单位显著值可分配的背光功率值为:50w/25=2w。
S404,将每单位显著值可分配的背光功率值乘以每个背光单元的块显著值,获得每个背光单元的背光增量,与预输出背光总功率值相加,得到调整后的背光总功率值。
示例性地,假设背光模组14中只有3个背光单元投影在输入图像对应的显著性区域上,第一背光单元的块显著值为5,第二背光单元的块显著值为8,第三背光单元的块显著值为12,则第一背光单元的背光增量为:2*5=10w;第二背光单元的背光增量为:2*8=16w;第三背光单元的背光增量为:2*12=24w;将总的背光增量与预输出背光总功率值相加,得到调整后的背光总功率值,则调整后背光总功率值为:300w+10w+16w+24w=350w。
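The numeric example above corresponds to the following straightforward arithmetic (plain Python, simply reproducing the 300 W / 350 W case from the text):

```python
# Worked example: pre-output total 300 W, rated total 350 W,
# three salient backlight units with block saliency values 5, 8 and 12.
pre_total, rated_total = 300.0, 350.0
saliency = [5.0, 8.0, 12.0]

remaining = rated_total - pre_total                    # 50 W of power left to allocate
per_unit_of_saliency = remaining / sum(saliency)       # 50 / 25 = 2 W per unit of saliency
increments = [per_unit_of_saliency * s for s in saliency]   # [10.0, 16.0, 24.0]
adjusted_total = pre_total + sum(increments)           # 300 + 50 = 350 W
print(increments, adjusted_total)
```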
S405,若预输出背光总功率值大于额定背光总功率值,则根据每个背光单元的块显著值和平均亮度值的计算得到每一个背光单元的背光下降强度。
在一个可能的实施方式中,根据每个背光单元的块显著值和平均亮度值的计算得到每一个背光单元的背光下降强度,可以通过以下步骤实现:
S4051,根据输入图像上对应每个背光单元的平均亮度值查找如图9A所示的第一预设曲线得到亮度增益值gain_avg;该亮度增益值指示亮度减小的程度。第一预设曲线为亮度-亮度增益值调整曲线。
S4052,根据每个背光单元的块显著值查找如图9B所示的第二预设曲线得到显著性增益值gain_sal;该显著性增益值指示显著性减小的程度。第二预设曲线为显著值-显著值增益值调整曲线。
S4053,根据亮度增益值和显著性增益值计算每个背光单元的背光下降强度:Gain=gain_avg*gain_sal。该背光下降强度指示背光功率值减小的程度。
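A sketch of steps S4051-S4053 follows; the control points of the two preset curves are placeholders, since the real curves of Figs. 9A and 9B are tuning parameters of the backlight module rather than values fixed by this application.

```python
import numpy as np

# Placeholder control points for the first and second preset curves (Figs. 9A and 9B).
avg_x, avg_gain = [0, 128, 255], [1.0, 0.6, 0.2]   # average luminance -> luminance gain gain_avg
sal_x, sal_gain = [0, 50, 100], [1.0, 0.5, 0.0]    # block saliency    -> saliency gain gain_sal

def backlight_drop_intensity(avg_luma, block_sal):
    """Steps S4051-S4053: Gain = gain_avg * gain_sal via look-ups on the preset curves."""
    gain_avg = np.interp(avg_luma, avg_x, avg_gain)
    gain_sal = np.interp(block_sal, sal_x, sal_gain)
    return gain_avg * gain_sal
```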
S406,根据背光下降强度和背光调整值降低非显著性区域的背光功率值,保持显著性区域的背光功率值,得到背光模组预输出的第二背光功率值。
在一个可能的实施方式中,步骤S406包括:
S4061,根据每个背光单元的背光功率值bl查找如图9C所示的第三预设曲线,得到背光调整值bl_delta;第三预设曲线为背光功率值-背光增益值调整曲线。
S4062,根据背光调整值bl_delta和背光下降强度Gain计算得到调整后的背光功率值:Bl_new=bl+bl_delta*Gain;
S4063,将非显著性区域的背光功率值减小至Bl_new;
S4064,根据多个显著性区域的背光功率值bl和调整后的非显著性区域背光功率值Bl_new,得到背光模组预输出的第二背光功率值。
S407,将第二背光功率值与额定背光总功率值再一次做比较,若第二背光功率值大于额定背光总功率值,则根据两者比值等比例降低每个背光单元的背光功率值;若调整后的背光总功率值小于或等于额定背光总功率值,则输出第二背光功率值。
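Steps S406-S407 together can be sketched as below; the negative sign of the placeholder third-curve values is an assumption that makes Bl_new = bl + bl_delta * Gain act as a reduction for non-salient units, and the helper name is hypothetical.

```python
import numpy as np

# Placeholder control points for the third preset curve of Fig. 9C
# (backlight power value -> backlight adjustment value); assumed negative for dimming.
bl_x, bl_delta_y = [0, 50, 100], [0.0, -10.0, -30.0]

def second_backlight_power(unit_power, salient_mask, drop_gain, rated_total):
    """Steps S406-S407: dim non-salient units, keep salient units, then rescale if needed."""
    unit_power = np.asarray(unit_power, dtype=float)
    bl_delta = np.interp(unit_power, bl_x, bl_delta_y)        # backlight adjustment value
    dimmed = unit_power + bl_delta * drop_gain                # Bl_new = bl + bl_delta * Gain
    new_power = np.where(salient_mask, unit_power, dimmed)    # salient units keep their power
    if new_power.sum() > rated_total:                         # S407: proportional reduction
        new_power = new_power * rated_total / new_power.sum()
    return new_power
```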
最后,根据第二背光功率值点亮背光模组的对应背光单元的背光。
本申请实施例提出的一种背光显示的控制方法根据AI检测信息有效识别人眼感兴趣区域,结合传统的显著性检测技术,自适应分配背光功率值调整背光模组的预输出背光功率值。当预输出背光功率值小于额定背光功率值时,通过增强显著性区域的对比度,提高画质表现;当预输出背光功率值大于额定背光功率值时,通过降低非显著性区域的亮度而保持显著性区域的表现,将人眼可感知的画质损失降到最低。背光模组通过采用本申请实施例提出的一种背光显示的控制方法,可以在功率值约束下提高感兴趣区域的对比度,改善画质体验。
基于上述实施例中的方法,本申请实施例提供了一种背光显示控制装置,包括:图像后处理模块,用于根据输入图像获得融合图像;识别融合图像中的显著性区域和非显著性区域;功率值控制模块,用于提高显著性区域的背光功率值;和/或降低非显著性区域的背光功率值。
其中,图像后处理模块包括:区域识别单元,用于确定输入图像的亮度分布;根据亮度分布调整融合图像上每个像素的第一显著值,得到第二显著值;根据融合图像上设定区域内多个像素的第二显著值加和后计算平均值获得设定区域的块显著值;设定区域为背光模组中的每个背光单元相对应区域,数量为多个;根据设定区域的块显著值确定融合图像中的显著性区域;根据融合图像中的显著性区域确定非显著性区域。
其中,功率值控制模块包括:第一背光统计单元,用于根据输入图像的亮度分布获得第一背光功率值;第一控制单元,用于在第一背光功率值小于额定背光总功率值的情况下,将剩余功率值增加至显著性区域;剩余功率值为第一背光功率值与额定背光总功率值之间的差值;第二控制单元,用于在第一背光功率值大于额定背光总功率值的情况下,降低非显著性区域的背光功率值,保持显著性区域的背光功率值。
其中,功率值控制模块还包括:第二背光统计单元,用于确定第二背光功率值;第三控制单元,用于在第二背光功率值大于额定背光功率值的条件下,根据第二背光功率值与额定背光功率值的比值降低非显著性区域背光功率值。
基于上述实施例中的方法,本申请实施例提供了一种电子设备,包括:至少一个存储器,用于存储程序;至少一个处理器,用于执行存储器存储的程序;和背光模组,用于点亮背光,背光模组通过传输接口与处理器连接,背光模组包括多个背光单元;当存储器存储的程序被执行时,处理器用于执行如第一方面任一项的方法,使得多个背光单元点亮对应区域的背光。
基于上述实施例中的方法,本申请实施例提供了一种计算机存储介质,计算机存储介质中存储有指令,当指令在计算机上运行时,使得计算机执行如第一方面任一项的方法。
基于上述实施例中的方法,本申请实施例提供了一种包含指令的计算机程序产品,当指令在计算机上运行时,使得计算机执行如第一方面任一项的方法。
基于上述实施例中的方法,本申请的实施例提供了一种背光显示控制装置,包括:处理器和传输接口;所述处理器通过所述传输接口接收或发送数据;所述处理器被配置为调用存储在存储器中的程序指令,以使得所述控制装置执行如第一方面任一所述的方法。
本领域普通技术人员可以意识到,结合本文中所公开的实施例描述的各示例的单元及算法步骤,能够以电子硬件、或者计算机软件和电子硬件的结合来实现。这些功能究竟以硬件还是软件方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本申请实施例的范围。
此外,本申请实施例的各个方面或特征可以实现成方法、装置或使用标准编程和/或工程技术的制品。本申请中使用的术语“制品”涵盖可从任何计算机可读器件、载体或介质访问的计算机程序。例如,计算机可读介质可以包括,但不限于:磁存储器件(例如,硬盘、 软盘或磁带等),光盘(例如,压缩盘(compact disc,CD)、数字通用盘(digital versatile disc,DVD)等),智能卡和闪存器件(例如,可擦写可编程只读存储器(erasable programmable read-only memory,EPROM)、卡、棒或钥匙驱动器等)。另外,本文描述的各种存储介质可代表用于存储信息的一个或多个设备和/或其它机器可读介质。术语“机器可读介质”可包括但不限于,无线信道和能够存储、包含和/或承载指令和/或数据的各种其它介质。
应当理解的是,在本申请实施例的各种实施例中,上述各过程的序号的大小并不意味着执行顺序的先后,各过程的执行顺序应以其功能和内在逻辑确定,而不应对本申请实施例的实施过程构成任何限定。
所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,上述描述的系统、装置和单元的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。
在本申请所提供的几个实施例中,应该理解到,所揭露的系统、装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
功能如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本申请实施例的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者接入网设备等)执行本申请实施例各个实施例方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(ROM,Read-Only Memory)、随机存取存储器(RAM,Random Access Memory)、磁碟或者光盘等各种可以存储程序代码的介质。
以上,仅为本发明的具体实施方式,但本发明的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本发明揭露的技术范围内,可轻易想到变化或替换,都应涵盖在本发明的保护范围之内。因此,本发明的保护范围应以权利要求的保护范围为准。
Claims (21)
- 一种背光显示的控制方法,其特征在于,所述方法包括,根据输入图像获得融合图像;识别所述融合图像中的显著性区域和非显著性区域;提高所述显著性区域的背光功率值;和/或降低所述非显著性区域的背光功率值。
- 根据权利要求1所述的背光显示的控制方法,其特征在于,所述根据输入图像获得融合图像,包括:根据传统显著性检测算法检测所述输入图像获得第一检测结果;根据人工智能(artificial intelligence,AI)信息识别算法检测所述输入图像获得第二检测结果;根据所述第一检测结果和所述第二检测结果获得所述融合图像。
- 根据权利要求2所述的背光显示的控制方法,其特征在于,所述根据传统显著性检测算法检测所述输入图像获得第一检测结果,包括:根据所述传统显著性检测算法检测所述输入图像中的低层先验信息和/或高层先验信息,输出所述第一检测结果;所述低层先验信息包括对比度先验信息或空间位置先验信息中的至少一项;所述高层先验信息包括人脸、文字、物体中的至少一项。
- 根据权利要求2或3所述的背光显示的控制方法,其特征在于,所述根据AI信息识别算法检测所述输入图像获得第二检测结果,包括:根据所述AI信息识别算法检测所述输入图像中的显著性目标,将所述显著性目标分割,保留所述显著性目标的边缘信息,获得所述第二检测结果。
- 根据权利要求2至4任一项所述的背光显示的控制方法,其特征在于,所述根据所述第一检测结果和所述第二检测结果获得所述融合图像,包括:确定所述第一检测结果中高亮度像素的第一权重值;确定所述第二检测结果中高亮度像素的第二权重值;根据所述第一权重值和所述第二权重值调整高亮度像素的权重值,得到所述融合图像。
- 根据权利要求1至5任一项所述的背光显示的控制方法,其特征在于,所述识别所述融合图像中的显著性区域和非显著性区域,包括:确定所述输入图像的亮度分布;根据所述亮度分布调整所述融合图像上每个像素的第一显著值,得到第二显著值;根据所述融合图像上设定区域内多个像素的第二显著值加和后计算平均值获得所述设定区域的块显著值;所述设定区域为背光模组中的每个背光单元相对应区域,数量为多个;根据所述设定区域的块显著值确定所述融合图像中的所述显著性区域;根据所述融合图像中的所述显著性区域确定所述非显著性区域。
- 根据权利要求6的背光显示的控制方法,其特征在于,所述根据所述亮度分布调整所述融合图像上每个像素的第一显著值,得到第二显著值,包括:确定所述输入图像的亮度分布;根据所述亮度分布与预设的亮度-权重曲线做加权平均,得到显著性增益-亮度分布的调整曲线;所述亮度分布包括低、中、高亮度的权重值;根据所述显著性增益-亮度分布的调整曲线得到每个像素的显著性增益值;所述显著性增益值用于增大或减小每个像素的显著值;所述每个像素的显著值为单个像素点相对输入图像的显著性程度;根据所述显著性增益值增大或减小所述融合图像上每个像素的第一显著值,得到所述每个像素的第二显著值。
- 根据权利要求1至7任一项所述的背光显示的控制方法,其特征在于,所述提高所述显著性区域的背光功率值,包括:根据所述输入图像确定第一背光功率值;在所述第一背光功率值小于额定背光总功率值的情况下,将剩余功率值增加至所述显著性区域;所述剩余功率值为所述第一背光功率值与所述额定背光总功率值之间的差值。
- 根据权利要求1至8任一项所述的背光显示的控制方法,其特征在于,所述降低非显著性区域的背光功率值,包括:根据所述输入图像确定所述第一背光功率值;在所述第一背光功率值大于额定背光总功率值的情况下,降低所述非显著性区域的背光功率值,保持所述显著性区域的背光功率值。
- 根据权利要求8或9所述的背光显示控制方法,其特征在于,所述根据所述输入图像确定第一背光功率值,包括,根据所述输入图像的当前背光功率值求和确定所述第一背光功率值。
- 根据权利要求9所述的背光显示的控制方法,其特征在于,所述降低非显著性区域的背光功率值,包括:根据所述输入图像的平均亮度值和第一预设曲线得到亮度增益值;所述第一预设曲线为亮度-亮度增益值调整曲线;根据所述输入图像的块显著值和第二预设曲线得到显著性增益值;所述第二预设曲线为显著值-显著值增益值调整曲线;根据所述亮度增益值和所述显著性增益值确定背光下降强度;根据所述非显著性区域当前的背光功率值和所述第三预设曲线得到背光调整值;所述第三预设曲线为背光功率值-背光增益值调整曲线;根据所述背光调整值和所述背光下降强度降低所述非显著性区域的背光功率值。
- 根据权利要求1至11任一项所述的背光显示的控制方法,其特征在于,所述降低非显著性区域的背光功率值,包括:确定第二背光功率值;在所述第二背光功率值大于所述额定背光功率值的条件下,根据所述第二背光功率值与所述额定背光功率值的比值降低所述非显著性区域背光功率值。
- 一种背光显示的控制装置,其特征在于,所述装置包括:图像后处理模块,用于根据输入图像获得融合图像;识别融合图像中的显著性区域和非显著性区域;和功率值控制模块,用于提高显著性区域的背光功率值;和/或降低非显著性区域的背光功率值。
- 根据权利要求13所述的背光显示的控制装置,其特征在于,所述图像后处理模块包括:图像融合单元,用于根据传统显著性检测算法检测所述输入图像获得第一检测结果;根据人工智能(artificial intelligence,AI)信息识别算法检测所述输入图像获得第二检测结果;根据所述第一检测结果和所述第二检测结果获得所述融合图像。
- 根据权利要求13或14所述的背光显示的控制装置,其特征在于,所述图像后处理模块包括:区域识别单元,用于确定所述输入图像的亮度分布;根据所述亮度分布调整所述融合图像上每个像素的第一显著值,得到第二显著值;根据所述融合图像上设定区域内多个像素的第二显著值加和后计算平均值获得所述设定区域的块显著值;所述设定区域为背光模组中的每个背光单元相对应区域,数量为多个;根据所述设定区域的块显著值确定所述融合图像中的所述显著性区域;根据所述融合图像中的所述显著性区域确定所述非显著性区域。
- 根据权利要求13至15任一项所述的背光显示的控制装置,其特征在于,所述功率值控制模块包括:第一背光统计单元,用于根据输入图像的亮度分布获得第一背光功率值;第一控制单元,用于在所述第一背光功率值小于额定背光总功率值的情况下,将剩余功率值增加至所述显著性区域;所述剩余功率值为所述第一背光功率值与所述额定背光总功率值之间的差值;第二控制单元,用于在所述第一背光功率值大于额定背光总功率值的情况下,降低所述非显著性区域的背光功率值,保持所述显著性区域的背光功率值。
- 根据权利要求13至16任一项所述的背光显示的控制装置,其特征在于,所述功率值控制模块还包括:第二背光统计单元,用于确定第二背光功率值;第三控制单元,用于在所述第二背光功率值大于所述额定背光功率值的条件下,根据所述第二背光功率值与所述额定背光功率值的比值降低所述非显著性区域背光功率值。
- 一种背光显示的控制装置,其特征在于,包括:处理器和传输接口;所述处理器通过所述传输接口接收或发送数据;所述处理器被配置为调用存储在存储器中的程序指令,以使得所述控制装置实现如权利要求1至12任一项所述的背光显示控制方法。
- 一种电子设备,其特征在于,包括:至少一个存储器,用于存储程序;至少一个处理器,用于执行所述存储器存储的程序;和背光模组,用于点亮背光,所述背光模组通过传输接口与所述处理器连接,所述背光模组包括多个背光单元;当所述存储器存储的程序被执行时,所述处理器用于执行如权利要求1-12任一项所述的方法,使得所述多个背光单元点亮对应区域的背光。
- 一种计算机存储介质,所述计算机存储介质中存储有指令,当所述指令在计算机上运行时,使得计算机执行如权利要求1-12任一所述的方法。
- 一种包含指令的计算机程序产品,当所述指令在计算机上运行时,使得所述计算机执行如权利要求1-12任一所述的方法。
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP21957043.9A EP4400990A4 (en) | 2021-09-15 | 2021-09-15 | BACKLIGHT DISPLAY CONTROL METHOD AND APPARATUS |
PCT/CN2021/118554 WO2023039753A1 (zh) | 2021-09-15 | 2021-09-15 | 一种背光显示的控制方法及装置 |
CN202180102227.5A CN117940965A (zh) | 2021-09-15 | 2021-09-15 | 一种背光显示的控制方法及装置 |
US18/604,703 US12334026B2 (en) | 2021-09-15 | 2024-03-14 | Backlight display control method and apparatus for adjusting a backlight power value of a region |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2021/118554 WO2023039753A1 (zh) | 2021-09-15 | 2021-09-15 | 一种背光显示的控制方法及装置 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US18/604,703 Continuation US12334026B2 (en) | 2021-09-15 | 2024-03-14 | Backlight display control method and apparatus for adjusting a backlight power value of a region |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2023039753A1 true WO2023039753A1 (zh) | 2023-03-23 |
Family
ID=85602238
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2021/118554 WO2023039753A1 (zh) | 2021-09-15 | 2021-09-15 | 一种背光显示的控制方法及装置 |
Country Status (4)
Country | Link |
---|---|
US (1) | US12334026B2 (zh) |
EP (1) | EP4400990A4 (zh) |
CN (1) | CN117940965A (zh) |
WO (1) | WO2023039753A1 (zh) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP2372686A1 (en) * | 2010-03-09 | 2011-10-05 | Vestel Elektronik Sanayi ve Ticaret A.S. | A method for local dimming boost using salient features |
US20120288139A1 (en) * | 2011-05-10 | 2012-11-15 | Singhar Anil Ranjan Roy Samanta | Smart backlights to minimize display power consumption based on desktop configurations and user eye gaze |
JP2012242672A (ja) * | 2011-05-20 | 2012-12-10 | Canon Inc | 液晶表示装置及びその制御方法 |
CN104574366A (zh) * | 2014-12-18 | 2015-04-29 | 华南理工大学 | 一种基于单目深度图的视觉显著性区域的提取方法 |
CN112767385A (zh) * | 2021-01-29 | 2021-05-07 | 天津大学 | 基于显著性策略与特征融合无参考图像质量评价方法 |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP5165788B1 (ja) * | 2011-12-26 | 2013-03-21 | シャープ株式会社 | 映像表示装置 |
US9754362B2 (en) | 2013-05-31 | 2017-09-05 | Sony Corporation | Image processing apparatus, image processing method, and program |
CN105390096A (zh) | 2015-11-24 | 2016-03-09 | 深圳创维-Rgb电子有限公司 | 一种区域调光的过驱控制方法及其装置 |
CN106339196B (zh) | 2016-08-31 | 2019-03-15 | 深圳市华星光电技术有限公司 | DeMura表的数据压缩、解压缩方法及Mura补偿方法 |
US11888002B2 (en) * | 2018-12-17 | 2024-01-30 | Meta Platforms Technologies, Llc | Dynamically programmable image sensor |
US11869449B2 (en) * | 2019-06-13 | 2024-01-09 | Saturn Licensing Llc | Image processing device, image processing method, display device having artificial intelligence function, and method of generating trained neural network model |
-
2021
- 2021-09-15 WO PCT/CN2021/118554 patent/WO2023039753A1/zh active Application Filing
- 2021-09-15 EP EP21957043.9A patent/EP4400990A4/en active Pending
- 2021-09-15 CN CN202180102227.5A patent/CN117940965A/zh active Pending
-
2024
- 2024-03-14 US US18/604,703 patent/US12334026B2/en active Active
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP2372686A1 (en) * | 2010-03-09 | 2011-10-05 | Vestel Elektronik Sanayi ve Ticaret A.S. | A method for local dimming boost using salient features |
US20120288139A1 (en) * | 2011-05-10 | 2012-11-15 | Singhar Anil Ranjan Roy Samanta | Smart backlights to minimize display power consumption based on desktop configurations and user eye gaze |
JP2012242672A (ja) * | 2011-05-20 | 2012-12-10 | Canon Inc | 液晶表示装置及びその制御方法 |
CN104574366A (zh) * | 2014-12-18 | 2015-04-29 | 华南理工大学 | 一种基于单目深度图的视觉显著性区域的提取方法 |
CN112767385A (zh) * | 2021-01-29 | 2021-05-07 | 天津大学 | 基于显著性策略与特征融合无参考图像质量评价方法 |
Non-Patent Citations (1)
Title |
---|
See also references of EP4400990A4 * |
Also Published As
Publication number | Publication date |
---|---|
US20240221691A1 (en) | 2024-07-04 |
EP4400990A4 (en) | 2024-10-30 |
CN117940965A (zh) | 2024-04-26 |
EP4400990A1 (en) | 2024-07-17 |
US12334026B2 (en) | 2025-06-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP5334402B2 (ja) | 映像のチラつきを改善するディスプレイ装置および方法 | |
US8860718B2 (en) | Method for converting input image data into output image data, image conversion unit for converting input image data into output image data, image processing apparatus, display device | |
JP5081973B2 (ja) | ヒストグラムの操作によるディスプレイ光源管理の為の方法およびシステム | |
RU2413383C2 (ru) | Блок цветового преобразования для уменьшения окантовки | |
JP5411848B2 (ja) | 画像トーンスケール設計のための方法及びシステム | |
US20090317017A1 (en) | Image characteristic oriented tone mapping for high dynamic range images | |
JP2010535352A (ja) | 画像特性を調節するための方法 | |
CN101399023A (zh) | 控制背光模块的方法、背光控制器及相应的显示装置 | |
US20200320941A1 (en) | Method of enhancing contrast and a dual-cell display apparatus | |
US11783450B2 (en) | Method and device for image processing, terminal device and storage medium | |
CN108962185A (zh) | 一种降低显示画面亮度的方法、其装置及显示装置 | |
TWI835280B (zh) | 一種色調映射方法、設備及系統 | |
CN114340102A (zh) | 灯带控制方法、装置、显示设备、系统以及存储介质 | |
CN111311500B (zh) | 一种对图像进行颜色还原的方法和装置 | |
CN116959381A (zh) | 图像增强显示的方法、显示设备及电子设备 | |
CN116485679A (zh) | 低照度增强处理方法、装置、设备及存储介质 | |
CN115379208B (zh) | 一种摄像头的测评方法及设备 | |
JPH1166301A (ja) | カラー画像分類方法及び装置及びこの方法を記録した記録媒体 | |
CN112488933B (zh) | 一种视频的细节增强方法、装置、移动终端和存储介质 | |
WO2023039753A1 (zh) | 一种背光显示的控制方法及装置 | |
CN115760652B (zh) | 扩展图像动态范围的方法和电子设备 | |
JP2009010636A (ja) | 適応ヒストグラム等化方法及び適応ヒストグラム等化装置 | |
CN103685972A (zh) | 影像优化方法以及使用此方法的系统 | |
CN116721257A (zh) | 图像处理方法、电子设备和计算机可读存储介质 | |
CN101568955A (zh) | 确定一个显示图像的lcd面板的背光亮度值的方法 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WWE | Wipo information: entry into national phase |
Ref document number: 202180102227.5 Country of ref document: CN |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2021957043 Country of ref document: EP |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
ENP | Entry into the national phase |
Ref document number: 2021957043 Country of ref document: EP Effective date: 20240409 |