
CN108830823A - Full-reference image quality evaluation method based on combined spatial-domain and frequency-domain analysis - Google Patents

Full-reference image quality evaluation method based on combined spatial-domain and frequency-domain analysis Download PDF

Info

Publication number
CN108830823A
CN108830823A, CN201810207410.1A, CN201810207410A, CN 108830823 A
Authority
CN
China
Prior art keywords
image
reference image
frequency
completion
distorted image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810207410.1A
Other languages
Chinese (zh)
Other versions
CN108830823B (en)
Inventor
郑元林
唐梽森
廖开阳
王玮
于淼淼
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xian University of Technology
Original Assignee
Xian University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xian University of Technology filed Critical Xian University of Technology
Priority to CN201810207410.1A priority Critical patent/CN108830823B/en
Publication of CN108830823A publication Critical patent/CN108830823A/en
Application granted granted Critical
Publication of CN108830823B publication Critical patent/CN108830823B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/40Analysis of texture
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/90Determination of colour characteristics
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30168Image quality inspection

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a full-reference image quality evaluation method based on combined spatial-domain and frequency-domain analysis. First, color space conversion is performed on the reference and distorted images in a database. Second, the spatial-domain gradient and frequency-domain phase features of the reference image and the distorted image are extracted to compute a global maximum-structure feature similarity. Then the frequency-domain texture similarity, the spatial-frequency similarity and the spatial-domain color similarity are computed and, together with the global maximum-structure feature similarity, assembled into a 9-D feature vector. A regression model is established with a random forest (RF) to fuse the feature vectors with the subjective MOS values, and the model is trained. Finally, the 9-D feature vector of the image under test is extracted and input to the trained regression model to predict image quality with high accuracy, completing the objective image quality evaluation. The invention realizes high-accuracy objective full-reference image quality evaluation and maintains high consistency with the characteristics of human vision.

Description

Full-reference image quality evaluation method based on combined spatial-domain and frequency-domain analysis
Technical field
The invention belongs to the technical field of image processing and image quality evaluation methods, and relates to a full-reference image quality evaluation method based on combined spatial-domain and frequency-domain analysis.
Background art
With the rapid development of multimedia, image processing and communication technology, digital images, as one of the most intuitive and effective information carriers, convey important visual information. At the same time, related image acquisition and processing devices, such as digital cameras, computers and smartphones, have become widespread and describe objective reality vividly and visually for users.
A high-quality visual experience is what users have always craved. However, during acquisition, storage, compression, transmission and restoration, various unavoidable factors cause images to suffer distortion and quality degradation; for example, mechanical shake during shooting or uneven exposure will degrade image quality. Therefore, a method that can faithfully and effectively evaluate the visual perceptual quality of output images is of great significance in fields such as image acquisition, coding and compression, and network transmission.
In recent years, full-reference image quality evaluation has become a research hotspot, and many researchers are studying this technique. Most existing methods follow the top-down image quality evaluation framework below (W. Zhou, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, "Image quality assessment: from error visibility to structural similarity," IEEE Transactions on Image Processing, vol. 13, no. 4, pp. 600-612, 2004.): first, the luminance, contrast and structure information of the reference image and the corresponding distorted image is extracted; second, the similarities of the three indices are computed, yielding luminance similarity, contrast similarity and structural similarity; finally, the three similarity features are weighted and averaged to obtain the quality score of the distorted image. On the premise of this theory, visual feature weights can be assigned according to image content (Z. Wang and Q. Li, "Information Content Weighting for Perceptual Image Quality Assessment," IEEE Transactions on Image Processing, vol. 20, no. 5, pp. 1185-1198, 2011.). In addition, some methods extract a single global feature from the whole image in the spatial domain to realize quality evaluation, but such methods cannot be used to evaluate color images (W. Xue, L. Zhang, X. Mou, and A. C. Bovik, "Gradient Magnitude Similarity Deviation: A Highly Efficient Perceptual Image Quality Index," IEEE Transactions on Image Processing, vol. 23, no. 2, pp. 684-695, 2014.).
In some recent publications, frequency-domain features are used to describe image structure information, further improving image quality evaluation models (L. Zhang, L. Zhang, X. Mou, and D. Zhang, "FSIM: A Feature Similarity Index for Image Quality Assessment," IEEE Transactions on Image Processing, vol. 20, no. 8, pp. 2378-2386, 2011; L. Zhang, Y. Shen, and H. Li, "VSI: A Visual Saliency-Induced Index for Perceptual Image Quality Assessment," IEEE Transactions on Image Processing, vol. 23, no. 10, pp. 4270-4281, 2014.).
At present, most image quality evaluation methods based on feature-similarity computation quantify image quality in either the spatial domain or the frequency domain alone. These methods exploit only single-domain feature information and ignore the fact that human vision processes spatial-domain and frequency-domain information simultaneously, which lowers the accuracy of the evaluation results.
Summary of the invention
The purpose of the present invention is to provide a full-reference image quality evaluation method based on combined spatial-domain and frequency-domain analysis, which builds a model from the spatial-domain and frequency-domain feature-similarity computations between the reference image and the distorted image and realizes accurate prediction of distorted-image quality.
The present invention adopts the following technical scheme to achieve the above purpose:
A full-reference image quality evaluation method based on combined spatial-domain and frequency-domain analysis comprises the following steps:
Step 1: convert the reference images and distorted images in the database from the RGB color space to the YIQ color space, separating the color information of the image from the luminance information;
Step 2: after step 1 is completed, extract the spatial-domain gradient magnitude features of the Y channel of the reference image and the distorted image from the obtained YIQ color space, and normalize them;
Step 3: after step 1 is completed, extract the frequency-domain phase features of the Y channel of the reference image and the distorted image in turn;
Step 4: using the normalized gradient magnitude features and phase features obtained in steps 2 and 3, keep the larger of the two features at each pixel position to build a global structure map, and compute the global structural similarity between the reference image and the corresponding distorted image;
Step 5: after step 1 is completed, extract the texture maps of the Y channel of the reference image and the distorted image in turn, and compute the texture similarity between the reference image and the distorted image;
Step 6: after step 1 is completed, extract the spatial-frequency features of the reference image and the distorted image in the Y channel, and compute the spatial-frequency similarity between the reference image and the distorted image;
Step 7: after step 1 is completed, compute the chrominance similarities between the reference image and the distorted image in the I channel and the Q channel respectively, and obtain the color similarity by multiplying them;
Step 8: after steps 4, 5, 6 and 7 are completed, fuse the obtained feature similarities in a random-forest regression model together with the subjective mean opinion score (MOS) values for training; the trained model is used directly to accurately predict the quality of the image to be evaluated.
As a further solution of the present invention, step 1 specifically comprises the following steps:
Perform color space conversion on the reference images and distorted images in the database, converting them from the RGB color space to the YIQ color space;
For an image in the database, its color space conversion is expressed in the form of formula (1);
In formula (1), the image size after color space conversion is identical to the image size before conversion, and the luminance information (Y channel) of the image is separated from the chrominance information (I and Q channels).
As a further solution of the present invention, step 2 specifically comprises the following steps:
Step 2.1: after step 1 is completed, extract the spatial-domain gradient magnitude features of the Y channel of the reference image and the distorted image from the obtained YIQ color space. The specific method is as follows:
Convolve the reference image and the distorted image with the horizontal and vertical components of a Prewitt operator with a 3x3 window to extract structural features;
For an image f(x), where x is the coordinate of a pixel in the image, the convolution is given by formula (2);
In formula (2), Gx(x) denotes the horizontal gradient magnitude and Gy(x) denotes the vertical gradient magnitude;
Step 2.2: after step 2.1 is completed, compute the gradient magnitudes of the reference image and the distorted image separately according to formula (3);
In formula (3), G denotes the gradient magnitude of an image;
Step 2.3: after step 2.2 is completed, normalize the gradient magnitude as follows:
Gnorm = G / Gmax    (4);
In formula (4), Gmax denotes the maximum of the gradient magnitude and Gnorm denotes the normalized gradient magnitude.
As a further solution of the present invention, step 3 specifically comprises the following steps:
Step 3.1: after step 1 is completed, extract the frequency-domain phase features of the Y channel of the reference image and the distorted image from the obtained YIQ color space;
Phase feature extraction convolves the image with log-Gabor filters in the frequency domain. For an image f(x, y) (reference image or distorted image), where x and y are pixel coordinates, its 2D log-Gabor transform is expressed in the form of formula (5);
In formula (5), ω is the frequency, ψ is the phase, σx and σy are the window widths in the horizontal and vertical directions respectively, and d is a scaling factor that ensures the condition stated alongside formula (5);
Step 3.2: after step 3.1 is completed, compute the response energy using the quadrature pair of even-symmetric (en) and odd-symmetric (on) filters obtained from the 2D log-Gabor transform; the response energy in direction θj is computed as in formula (6);
In formula (6), n indexes the scale;
The local amplitude is then computed by formula (7);
In formula (7), the local amplitude is obtained from the even-symmetric filter response and the odd-symmetric filter response;
Step 3.3: after step 3.2 is completed, compute the frequency-domain phase information as in formula (8);
In formula (8), n indexes the scales, j indexes the orientations, and λ = 0.0001 is a small constant that prevents the denominator from becoming zero.
As a further solution of the present invention, step 4 specifically comprises the following steps:
Step 4.1: build the global maximum-structure map from the normalized gradient features and phase features obtained in steps 2 and 3, taking the larger of the phase value and the normalized gradient value at each pixel position as the maximum-structure feature point. The specific method is as follows:
LS(i, j) = max{Gnorm(i, j), PC(i, j)}    (9);
In formula (9), (i, j) is the position of a pixel in the image and LS(i, j) is the global maximum-structure map;
Step 4.2: after step 4.1 is completed, compute the global maximum-structure similarity using the global maximum-structure map LSr of the reference image and the global maximum-structure map LSd of the distorted image, as in formula (10);
In formula (10), the two mean values are those of the reference image's and the distorted image's global maximum-structure maps, and N is the total number of pixels in the image.
As a further solution of the present invention, step 5 specifically comprises the following steps:
Step 5.1: after step 1 is completed, extract the frequency-domain texture features of the Y channel of the reference image and the distorted image from the obtained YIQ color space using log-Gabor filters;
Using the log-Gabor filter described in formula (5), choose a filter bank with 4 scales and four orientations (0°, 45°, 90° and 135°), convolve the image with it, and compute the amplitude of the complex matrix produced by the convolution at each scale, i.e. the energy map, as in formula (11);
In formula (11), a and b denote the size of the matrix, the two terms are the real part and the imaginary part of the complex matrix, and Ea,b(x, y) is the amplitude of the complex matrix, i.e. the energy map;
Step 5.2: after step 5.1 is completed, compute the similarity between the two images using the chi-square distance;
The similarity between the energy map ER(i) of the reference image at the s-th scale and the energy map ED(i) of the distorted image at the s-th scale takes the form of formula (12);
In formula (12), i is the pixel position and the result is the texture similarity between the reference image and the distorted image at the s-th scale.
As a further solution of the present invention, step 6 specifically comprises the following steps:
Step 6.1: after step 1 is completed, transform the Y channel of the reference image and the distorted image into the discrete cosine domain, and accumulate the discrete cosine coefficients in each of three frequency bands;
The statistics of the discrete cosine coefficients for the low-frequency region RL, the mid-frequency region RM and the high-frequency region RH are given by formulas (13), (14) and (15);
In formulas (13), (14) and (15), P(u, v) is a discrete cosine coefficient and (u, v) is the pixel position;
Step 6.2: after step 6.1 is completed, compute the spatial-frequency similarity between the reference image and the distorted image from the obtained low-, mid- and high-frequency discrete cosine coefficients, as in formulas (16), (17) and (18);
In formulas (16), (17) and (18), φRL, φRM and φRH are the discrete cosine coefficients of the low-, mid- and high-frequency regions of the reference image, φDL, φDM and φDH are those of the distorted image, N is the total number of pixels, and C1 (C1 = 0.6), C2 (C2 = 2000) and C3 (C3 = 1.7) are positive constants that stabilize the denominator and prevent it from being zero.
As a further solution of the present invention, step 7 specifically comprises the following steps:
Step 7.1: after step 1 is completed, compute the chrominance features of the reference image and the distorted image in the two chrominance channels I and Q from the obtained YIQ color space;
The color-feature similarity between any pixel position x of the reference image in the I (Q) channel and the corresponding pixel position x of the distorted image in the I (Q) channel takes the form of formulas (19) and (20);
In formulas (19) and (20), IR(x) and QR(x) are the color features of the reference image in the I and Q channels respectively, ID(x) and QD(x) are the color features of the distorted image in the I and Q channels respectively, N is the total number of pixels, and C4 (C4 = 200) is a positive constant that stabilizes the denominator and prevents it from being zero;
Step 7.2: after step 7.1 is completed, compute the global color similarity between the reference image and the distorted image from the obtained color similarities of the I and Q channels, as in formula (21);
In formula (21), N is the total number of pixels and x is the pixel position.
As a further solution of the present invention, step 8 specifically comprises the following steps:
Step 8.1: after steps 4, 5, 6 and 7 are completed, take the nine obtained similarity features (the global maximum-structure similarity CCLS, the per-scale texture similarities, and the similarities SL, SM, SH and SC), combine them with the subjective mean opinion score (MOS) values of the distorted images in the database, input them jointly into the regression model built by the random forest (RF) and train it, setting the number of decision trees in the model to ntree = 500 and the number of candidate variables per node split to mtry = 3;
Step 8.2: after step 8.1 is completed, use the trained regression model to evaluate one or more distorted images under test: first extract the similarity features of each distorted image and its corresponding reference image according to steps 4, 5, 6 and 7, then input the similarity features into the trained RF regression model to obtain the predicted quality score, thereby completing the evaluation of distorted-image quality.
Compared with the prior art, the present invention has the following advantages:
(1) The full-reference image quality evaluation method based on combined spatial-domain and frequency-domain analysis of the present invention extracts four kinds of different, complementary similarity features on large public databases to describe image structure information, solving the problem that traditional features have low consistency with subjective human perception;
(2) The method can fuse the similarity features with the subjective MOS scores in a regression model established by a random forest (RF) for learning and prediction, which improves the robustness of the model and thereby its generality;
(3) In use, the method substantially improves the accuracy of image quality prediction and exhibits high consistency with the human visual system.
Brief description of the drawings
Fig. 1 is a framework diagram of the full-reference image quality evaluation method based on combined spatial-domain and frequency-domain analysis of the present invention.
Specific embodiment
The present invention is further elaborated below with reference to the drawings and specific embodiments.
As shown in Fig. 1, the full-reference image quality evaluation method combining spatial-domain and frequency-domain analysis of the present invention can be divided into two major parts: the establishment of the RF model and the prediction of image quality.
In the RF-model establishment part, the processing objects are the reference images and distorted images in the image database; the four kinds of similarity features of the present invention are extracted and, combined with the subjective MOS values in the database, a regression model is established using a random forest (RF).
In the image-quality prediction part, the global maximum-structure, texture, spatial-frequency and color similarities between the distorted image and the corresponding reference image are computed; these four kinds of similarity features are assembled into a 9-D feature vector and supplied to the RF regression model as its input, so that the quality of the distorted image is predicted and the image quality evaluation is completed.
The full-reference image quality evaluation method based on combined spatial-domain and frequency-domain analysis of the present invention is specifically implemented according to the following steps:
Step 1: perform color space conversion on the reference images and distorted images in the database, converting them from the RGB color space to the YIQ color space according to formula (1).
In formula (1), the image size after color space conversion is identical to the image size before conversion, and the luminance information (Y channel) of the image is separated from the chrominance information (I and Q channels).
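Formula (1) is shown only as a drawing in the original publication, so its exact coefficients are not reproduced in the text. As an illustration of this step, the following Python sketch assumes the standard NTSC RGB-to-YIQ matrix; the names `rgb_to_yiq` and `RGB2YIQ` are introduced here for illustration only.

```python
import numpy as np

# Standard NTSC RGB -> YIQ matrix (assumed; formula (1) is not shown in the text).
RGB2YIQ = np.array([[0.299,  0.587,  0.114],
                    [0.596, -0.274, -0.322],
                    [0.211, -0.523,  0.312]])

def rgb_to_yiq(rgb):
    """Convert an H x W x 3 RGB image (float, range [0, 1]) to YIQ.

    The output has the same spatial size as the input; the Y channel carries
    the luminance information and the I and Q channels the chrominance.
    """
    return rgb @ RGB2YIQ.T

# Usage: y, i, q = np.moveaxis(rgb_to_yiq(img), -1, 0)
```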
Step 2: after step 1 is completed, extract the normalized gradient maps of the reference image and the distorted image from the Y channel of the converted color space. This is specifically implemented according to the following steps:
Step 2.1: convolve the Y channels of the reference image and the distorted image with the horizontal and vertical components of a Prewitt operator with a 3x3 window to extract structural features;
For an image f(x), where x is the coordinate of a pixel in the image, the convolution is given by formula (2);
In formula (2), Gx(x) denotes the horizontal gradient magnitude and Gy(x) denotes the vertical gradient magnitude.
Step 2.2: after step 2.1 is completed, compute the gradient magnitudes of the reference image and the distorted image separately according to formula (3);
In formula (3), G denotes the gradient magnitude of an image.
Step 2.3: after step 2.2 is completed, normalize the gradient magnitude as follows:
Gnorm = G / Gmax    (4);
In formula (4), Gmax denotes the maximum of the gradient magnitude and Gnorm denotes the normalized gradient magnitude.
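As an illustration of steps 2.1 to 2.3, the sketch below assumes the usual 3x3 Prewitt kernels and the gradient magnitude G = sqrt(Gx^2 + Gy^2), since formulas (2) and (3) appear only as drawings; the helper name `normalized_gradient` is illustrative.

```python
import numpy as np
from scipy.ndimage import convolve

# 3x3 Prewitt kernels (assumed form; formula (2) is not shown in the text).
PREWITT_X = np.array([[-1, 0, 1],
                      [-1, 0, 1],
                      [-1, 0, 1]], dtype=float) / 3.0
PREWITT_Y = PREWITT_X.T

def normalized_gradient(y_channel):
    """Return the normalized gradient magnitude map of a Y-channel image."""
    gx = convolve(y_channel, PREWITT_X)   # horizontal gradient, formula (2)
    gy = convolve(y_channel, PREWITT_Y)   # vertical gradient, formula (2)
    g = np.sqrt(gx ** 2 + gy ** 2)        # gradient magnitude, assumed form of formula (3)
    return g / (g.max() + 1e-12)          # Gnorm = G / Gmax, formula (4)
```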
Step 3: after step 1 is completed, extract the phase information of the reference image and the distorted image from the Y channel of the converted color space. This is specifically implemented according to the following steps:
Step 3.1: convolve the Y channel of the image with log-Gabor filters in the frequency domain. For an image f(x, y) (reference image or distorted image), where x and y are pixel coordinates, its 2D log-Gabor transform is expressed in the form of formula (5);
In formula (5), ω is the frequency, ψ is the phase, σx and σy are the window widths in the horizontal and vertical directions respectively, and d is a scaling factor that ensures the condition stated alongside formula (5).
Step 3.2: after step 3.1 is completed, compute the response energy using the quadrature pair of even-symmetric (en) and odd-symmetric (on) filters obtained from the 2D log-Gabor transform; the response energy in direction θj is computed as in formula (6);
In formula (6), n indexes the scale;
The local amplitude is then computed by formula (7);
In formula (7), the local amplitude is obtained from the even-symmetric filter response and the odd-symmetric filter response.
Step 3.3: after step 3.2 is completed, compute the frequency-domain phase information as in formula (8);
In formula (8), n indexes the scales, j indexes the orientations, and λ = 0.0001 is a small constant that prevents the denominator from becoming zero.
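Formulas (5) to (8) likewise appear only as drawings, so the sketch below follows the common log-Gabor phase-congruency construction: the even and odd responses are the real and imaginary parts of the filtered image, and the phase map is the local energy divided by the sum of local amplitudes plus λ. The filter parameters (centre frequencies, bandwidths, number of scales and orientations) and the function names are illustrative assumptions, not values taken from the patent.

```python
import numpy as np

def log_gabor_bank(shape, scales=4, orientations=4, min_wavelength=6.0,
                   mult=2.0, sigma_f=0.55, sigma_theta=0.4):
    """Build frequency-domain log-Gabor filters (illustrative parameters)."""
    rows, cols = shape
    fy = np.fft.fftfreq(rows)[:, None]
    fx = np.fft.fftfreq(cols)[None, :]
    radius = np.sqrt(fx ** 2 + fy ** 2)
    radius[0, 0] = 1.0                      # avoid log(0) at the DC term
    theta = np.arctan2(-fy, fx)
    filters = []
    for s in range(scales):
        f0 = 1.0 / (min_wavelength * mult ** s)
        radial = np.exp(-(np.log(radius / f0) ** 2) / (2 * np.log(sigma_f) ** 2))
        radial[0, 0] = 0.0
        for o in range(orientations):
            angle = o * np.pi / orientations
            dtheta = np.arctan2(np.sin(theta - angle), np.cos(theta - angle))
            angular = np.exp(-(dtheta ** 2) / (2 * sigma_theta ** 2))
            filters.append(radial * angular)
    return filters

def phase_feature(y_channel, lam=1e-4):
    """Phase map in the spirit of formulas (6)-(8), pooled over scales and orientations."""
    F = np.fft.fft2(y_channel)
    sum_even = np.zeros_like(y_channel, dtype=float)
    sum_odd = np.zeros_like(y_channel, dtype=float)
    amplitude_sum = np.zeros_like(y_channel, dtype=float)
    for g in log_gabor_bank(y_channel.shape):
        response = np.fft.ifft2(F * g)                   # even = real part, odd = imaginary part
        even, odd = response.real, response.imag
        amplitude_sum += np.sqrt(even ** 2 + odd ** 2)   # local amplitude, formula (7)
        sum_even += even
        sum_odd += odd
    energy = np.sqrt(sum_even ** 2 + sum_odd ** 2)       # local energy, formula (6)
    return energy / (amplitude_sum + lam)                # phase map, formula (8)
```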
Step 4: build the global maximum-structure map from the normalized gradient features and phase features obtained after steps 2 and 3. This is specifically implemented according to the following steps:
Step 4.1: build the global maximum-structure map from the obtained normalized gradient features and phase features, taking the larger of the phase value and the normalized gradient value at each pixel position as the maximum-structure feature point. The specific method is as follows:
LS(i, j) = max{Gnorm(i, j), PC(i, j)}    (9);
In formula (9), (i, j) is the position of a pixel in the image and LS(i, j) is the global maximum-structure map.
Step 4.2: after step 4.1 is completed, compute the global maximum-structure similarity using the global maximum-structure map LSr of the reference image and the global maximum-structure map LSd of the distorted image, as in formula (10);
In formula (10), the two mean values are those of the reference image's and the distorted image's global maximum-structure maps, and N is the total number of pixels in the image.
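Formula (9) reduces to a pointwise maximum, while formula (10) is described only through the two map means and the pixel count N. The sketch below therefore pairs the pointwise maximum with a correlation-coefficient-style similarity as a stand-in; the exact expression of formula (10) in the patent may differ.

```python
import numpy as np

def global_max_structure(g_norm, pc):
    """LS(i, j) = max{Gnorm(i, j), PC(i, j)}, formula (9)."""
    return np.maximum(g_norm, pc)

def structure_similarity(ls_ref, ls_dist, eps=1e-12):
    """Correlation-style similarity between the two maps (assumed form of formula (10))."""
    dr = ls_ref - ls_ref.mean()      # deviation from the reference-map mean
    dd = ls_dist - ls_dist.mean()    # deviation from the distorted-map mean
    return float((dr * dd).sum() / (np.sqrt((dr ** 2).sum() * (dd ** 2).sum()) + eps))
```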
Step 5: after step 1 is completed, extract the texture information of the reference image and the distorted image from the Y channel of the converted color space. This is specifically implemented according to the following steps:
Step 5.1: using the log-Gabor filter of formula (5), choose a filter bank with 4 scales and four orientations (0°, 45°, 90° and 135°), convolve the image with it, and compute the amplitude of the complex matrix produced by the convolution at each scale, i.e. the energy map, as in formula (11);
In formula (11), a and b denote the size of the matrix, the two terms are the real part and the imaginary part of the complex matrix, and Ea,b(x, y) is the amplitude of the complex matrix, i.e. the energy map.
Step 5.2: after step 5.1 is completed, compute the similarity between the two images using the chi-square distance.
The similarity between the energy map ER(i) of the reference image at the s-th scale and the energy map ED(i) of the distorted image at the s-th scale takes the form of formula (12);
In formula (12), i is the pixel position and the result is the texture similarity between the reference image and the distorted image at the s-th scale.
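The sketch below illustrates step 5 under stated assumptions: it reuses the illustrative `log_gabor_bank` from the phase-feature sketch above, pools the per-scale energy over the four orientations, and converts the chi-square distance of formula (12) into a similarity with 1/(1 + chi2); the patent's exact per-scale energy definition and distance-to-similarity mapping are not reproduced in the text.

```python
import numpy as np

def texture_similarities(y_ref, y_dist, scales=4, orientations=4, eps=1e-12):
    """Per-scale texture similarity from log-Gabor energy maps via a chi-square distance."""
    bank = log_gabor_bank(y_ref.shape, scales=scales, orientations=orientations)
    fr, fd = np.fft.fft2(y_ref), np.fft.fft2(y_dist)
    sims = []
    for s in range(scales):
        # Pool the complex amplitude (energy map, formula (11)) over the four orientations.
        er = sum(np.abs(np.fft.ifft2(fr * bank[s * orientations + o]))
                 for o in range(orientations))
        ed = sum(np.abs(np.fft.ifft2(fd * bank[s * orientations + o]))
                 for o in range(orientations))
        chi2 = ((er - ed) ** 2 / (er + ed + eps)).mean()  # chi-square distance, averaged over pixels
        sims.append(1.0 / (1.0 + chi2))                   # assumed distance-to-similarity mapping
    return sims                                           # one similarity per scale
```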
Step 6: after step 1 is completed, extract the spatial-frequency features of the reference image and the distorted image from the Y channel of the converted color space. This is specifically implemented according to the following steps:
Step 6.1: transform the Y channel of the reference image and the distorted image into the discrete cosine domain, and accumulate the discrete cosine coefficients in each of three frequency bands.
The statistics of the discrete cosine coefficients for the low-frequency region RL, the mid-frequency region RM and the high-frequency region RH are given by formulas (13), (14) and (15);
In formulas (13), (14) and (15), P(u, v) is a discrete cosine coefficient and (u, v) is the pixel position.
Step 6.2: after step 6.1 is completed, compute the spatial-frequency similarity between the reference image and the distorted image from the obtained low-, mid- and high-frequency discrete cosine coefficients, as in formulas (16), (17) and (18);
In formulas (16), (17) and (18), φRL, φRM and φRH are the discrete cosine coefficients of the low-, mid- and high-frequency regions of the reference image, φDL, φDM and φDH are those of the distorted image, N is the total number of pixels, and C1 (C1 = 0.6), C2 (C2 = 2000) and C3 (C3 = 1.7) are positive constants that stabilize the denominator and prevent it from being zero.
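Formulas (13) to (18) are not reproduced in the text. The sketch below assumes that the three regions partition the 2D-DCT plane by a normalized band coordinate derived from (u, v), and that each band similarity uses the standard form (2ab + C)/(a^2 + b^2 + C) with the constants C1, C2 and C3 named above; both assumptions are illustrative and may differ from the patent's exact definitions.

```python
import numpy as np
from scipy.fft import dctn

def dct_band_energies(y_channel, low=1/3, mid=2/3):
    """Sum |DCT coefficients| in low/mid/high bands (assumed band split)."""
    coeffs = np.abs(dctn(y_channel, norm='ortho'))        # P(u, v), formulas (13)-(15)
    rows, cols = coeffs.shape
    u = np.arange(rows)[:, None] / rows
    v = np.arange(cols)[None, :] / cols
    r = (u + v) / 2.0                                     # normalized band coordinate
    return (coeffs[r < low].sum(),                        # low-frequency region RL
            coeffs[(r >= low) & (r < mid)].sum(),         # mid-frequency region RM
            coeffs[r >= mid].sum())                       # high-frequency region RH

def spatial_frequency_similarities(y_ref, y_dist, c=(0.6, 2000.0, 1.7)):
    """SL, SM, SH built with the constants C1, C2, C3 given in the text (form assumed)."""
    ref, dist = dct_band_energies(y_ref), dct_band_energies(y_dist)
    return [float((2 * a * b + ci) / (a ** 2 + b ** 2 + ci))
            for a, b, ci in zip(ref, dist, c)]
```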
Step 7: after step 1 is completed, compute the color similarities of the reference image and the distorted image in the two chrominance channels I and Q of the converted color space. This is specifically implemented according to the following steps:
Step 7.1: the color similarity between any pixel position x of the reference image in the I (Q) channel and the corresponding pixel position x of the distorted image in the I (Q) channel takes the form of formulas (19) and (20);
In formulas (19) and (20), IR(x) and QR(x) are the color features of the reference image in the I and Q channels respectively, ID(x) and QD(x) are the color features of the distorted image in the I and Q channels respectively, N is the total number of pixels, and C4 (C4 = 200) is a positive constant that stabilizes the denominator and prevents it from being zero.
Step 7.2: after step 7.1 is completed, compute the global color similarity between the reference image and the distorted image from the obtained color similarities of the I and Q channels, as in formula (21);
In formula (21), N is the total number of pixels and x is the pixel position.
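Formulas (19) to (21) are likewise not reproduced. The sketch assumes the widely used chrominance-similarity form (2ab + C)/(a^2 + b^2 + C) applied pixel-wise to the I and Q channels with the constant C4 = 200 named above, and combines the two channels by multiplication before averaging over the N pixels, as the description states.

```python
import numpy as np

def color_similarity(i_ref, q_ref, i_dist, q_dist, c4=200.0):
    """Global color similarity over the I and Q channels (assumed form of (19)-(21))."""
    s_i = (2 * i_ref * i_dist + c4) / (i_ref ** 2 + i_dist ** 2 + c4)   # formula (19), assumed
    s_q = (2 * q_ref * q_dist + c4) / (q_ref ** 2 + q_dist ** 2 + c4)   # formula (20), assumed
    return float((s_i * s_q).mean())   # formula (21): multiply the channels, average over N pixels
```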
Step 8: after steps 4, 5, 6 and 7 are completed, combine the nine obtained similarity features (the global maximum-structure similarity CCLS, the per-scale texture similarities, and the similarities SL, SM, SH and SC) with the subjective MOS scores to train and establish the regression model, and use the model to predict the quality of the images under test. This is specifically implemented according to the following steps:
Step 8.1: take the nine similarity features, combine them with the subjective mean opinion score (MOS) values of the distorted images in the database, input them jointly into the regression model built by the random forest (RF) and train it, setting the number of decision trees in the model to ntree = 500 and the number of candidate variables per node split to mtry = 3;
Step 8.2: after step 8.1 is completed, use the trained regression model to evaluate one or more distorted images under test: first extract the similarity features of each distorted image and its corresponding reference image according to steps 4, 5, 6 and 7, then input the similarity features into the trained RF regression model to obtain the predicted quality score, thereby completing the evaluation of distorted-image quality.
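The patent states the random-forest parameters in the style of R's randomForest package (ntree = 500, mtry = 3). The sketch below is a scikit-learn equivalent; `features` is assumed to be an M x 9 array of similarity vectors (structure, texture, spatial-frequency and color similarities) and `mos` the corresponding subjective scores, and the exact training protocol (database splits, cross-validation) is not described in this section.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# features: M x 9 matrix of similarity vectors per training image (assumed precomputed)
# mos:      length-M vector of subjective MOS values
def train_quality_model(features, mos):
    model = RandomForestRegressor(n_estimators=500,   # ntree = 500
                                  max_features=3,     # mtry = 3 candidate variables per split
                                  random_state=0)
    model.fit(features, mos)
    return model

# Prediction for a test image: assemble its 9-D similarity vector the same way,
# then call model.predict(test_vector.reshape(1, -1)) to obtain the quality score.
```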
Functionally, the full-reference image quality evaluation method based on combined spatial-domain and frequency-domain analysis of the present invention proceeds as follows: first, color space conversion is performed on the reference and distorted images in the database; second, the spatial-domain gradient and frequency-domain phase features of the reference image and the distorted image are extracted to compute the global maximum-structure feature similarity; then the frequency-domain texture similarity, the spatial-frequency similarity and the spatial-domain color similarity are computed and, together with the global maximum-structure feature similarity, assembled into a 9-D feature vector; next, a random forest (RF) is trained on the feature vectors and the MOS values to establish the regression model; finally, the 9-D feature vector of the image under test is extracted and used as the input of the RF regression model, so that the quality of the image under test is predicted with high accuracy and the image quality is evaluated.
The present invention makes full use of four kinds of similarity features that are consistent with the characteristics of human vision. From the reference images and distorted images in the database, it establishes a random-forest (RF) regression model to fuse the similarity features, trains it and performs prediction, thereby predicting image quality with high accuracy and maintaining high consistency with human visual perception.
The above are preferred embodiments of the present invention. For those of ordinary skill in the art, changes, modifications, substitutions and variations made to the embodiments according to the teaching of the present invention, without departing from the principle and spirit of the present invention, still fall within the protection scope of the present invention.

Claims (9)

1. A full-reference image quality evaluation method based on combined spatial-domain and frequency-domain analysis, characterized by comprising the following steps:
Step 1: convert the reference images and distorted images in the database from the RGB color space to the YIQ color space, so that the color information of the image is separated from the luminance information;
Step 2: after step 1 is completed, extract the spatial-domain gradient magnitude features of the Y channel of the reference image and the distorted image from the obtained YIQ color space, and normalize them;
Step 3: after step 1 is completed, extract the frequency-domain phase features of the Y channel of the reference image and the distorted image in turn;
Step 4: using the normalized gradient magnitude features and phase features obtained in steps 2 and 3, keep the larger of the two features at each pixel position to build a global structure map, and compute the global structural similarity between the reference image and the corresponding distorted image;
Step 5: after step 1 is completed, extract the texture maps of the Y channel of the reference image and the distorted image in turn, and compute the texture similarity between the reference image and the distorted image;
Step 6: after step 1 is completed, extract the spatial-frequency features of the reference image and the distorted image in the Y channel, and compute the spatial-frequency similarity between the reference image and the distorted image;
Step 7: after step 1 is completed, compute the chrominance similarities between the reference image and the distorted image in the I channel and the Q channel respectively, and obtain the color similarity by multiplying them;
Step 8: after steps 4, 5, 6 and 7 are completed, fuse the obtained feature similarities in a random-forest regression model together with the subjective mean opinion score (MOS) values for training; the trained model is used directly to accurately predict the quality of the image to be evaluated.
2. The full-reference image quality evaluation method based on combined spatial-domain and frequency-domain analysis according to claim 1, characterized in that step 1 specifically comprises the following steps:
perform color space conversion on the reference images and distorted images in the database, converting them from the RGB color space to the YIQ color space;
for an image in the database, its color space conversion is expressed in the form of formula (1);
in formula (1), the image size after color space conversion is identical to the image size before conversion, and the luminance information (Y channel) of the image is separated from the chrominance information (I and Q channels).
3. The full-reference image quality evaluation method based on combined spatial-domain and frequency-domain analysis according to claim 1, characterized in that step 2 specifically comprises the following steps:
Step 2.1: after step 1 is completed, extract the spatial-domain gradient magnitude features of the Y channel of the reference image and the distorted image from the obtained YIQ color space; the specific method is as follows:
convolve the reference image and the distorted image with the horizontal and vertical components of a Prewitt operator with a 3x3 window to extract structural features;
for an image f(x), where x is the coordinate of a pixel in the image, the convolution is given by formula (2);
in formula (2), Gx(x) denotes the horizontal gradient magnitude and Gy(x) denotes the vertical gradient magnitude;
Step 2.2: after step 2.1 is completed, compute the gradient magnitudes of the reference image and the distorted image separately according to formula (3);
in formula (3), G denotes the gradient magnitude of an image;
Step 2.3: after step 2.2 is completed, normalize the gradient magnitude as follows:
Gnorm = G / Gmax    (4);
in formula (4), Gmax denotes the maximum of the gradient magnitude and Gnorm denotes the normalized gradient magnitude.
4. The full-reference image quality evaluation method based on combined spatial-domain and frequency-domain analysis according to claim 1, characterized in that step 3 specifically comprises the following steps:
Step 3.1: after step 1 is completed, extract the frequency-domain phase features of the Y channel of the reference image and the distorted image from the obtained YIQ color space;
phase feature extraction convolves the image with log-Gabor filters in the frequency domain; for an image f(x, y) (reference image or distorted image), where x and y are pixel coordinates, its 2D log-Gabor transform is expressed in the form of formula (5);
in formula (5), ω is the frequency, ψ is the phase, σx and σy are the window widths in the horizontal and vertical directions respectively, and d is a scaling factor that ensures the condition stated alongside formula (5);
Step 3.2: after step 3.1 is completed, compute the response energy using the quadrature pair of even-symmetric (en) and odd-symmetric (on) filters obtained from the 2D log-Gabor transform; the response energy in direction θj is computed as in formula (6);
in formula (6), n indexes the scale;
the local amplitude is then computed by formula (7);
in formula (7), the local amplitude is obtained from the even-symmetric filter response and the odd-symmetric filter response;
Step 3.3: after step 3.2 is completed, compute the frequency-domain phase information as in formula (8);
in formula (8), n indexes the scales, j indexes the orientations, and λ = 0.0001 is a small constant that prevents the denominator from becoming zero.
5. The full-reference image quality evaluation method based on combined spatial-domain and frequency-domain analysis according to claim 1, characterized in that step 4 specifically comprises the following steps:
Step 4.1: build the global maximum-structure map from the normalized gradient features and phase features obtained after steps 2 and 3, taking the larger of the phase value and the normalized gradient value at each pixel position as the maximum-structure feature point; the specific method is as follows:
LS(i, j) = max{Gnorm(i, j), PC(i, j)}    (9);
in formula (9), (i, j) is the position of a pixel in the image and LS(i, j) is the global maximum-structure map;
Step 4.2: after step 4.1 is completed, compute the global maximum-structure similarity using the global maximum-structure map LSr of the reference image and the global maximum-structure map LSd of the distorted image, as in formula (10);
in formula (10), the two mean values are those of the reference image's and the distorted image's global maximum-structure maps, and N is the total number of pixels in the image.
6. The full-reference image quality evaluation method based on combined spatial-domain and frequency-domain analysis according to claim 1, characterized in that step 5 specifically comprises the following steps:
Step 5.1: after step 1 is completed, extract the frequency-domain texture features of the Y channel of the reference image and the distorted image from the obtained YIQ color space using log-Gabor filters;
using the log-Gabor filter described in formula (5), choose a filter bank with 4 scales and four orientations (0°, 45°, 90° and 135°), convolve the image with it, and compute the amplitude of the complex matrix produced by the convolution at each scale, i.e. the energy map, as in formula (11);
in formula (11), a and b denote the size of the matrix, the two terms are the real part and the imaginary part of the complex matrix, and Ea,b(x, y) is the amplitude of the complex matrix, i.e. the energy map;
Step 5.2: after step 5.1 is completed, compute the similarity between the two images using the chi-square distance;
the similarity between the energy map ER(i) of the reference image at the s-th scale and the energy map ED(i) of the distorted image at the s-th scale takes the form of formula (12);
in formula (12), i is the pixel position and the result is the texture similarity between the reference image and the distorted image at the s-th scale.
7. The full-reference image quality evaluation method based on combined spatial-domain and frequency-domain analysis according to claim 1, characterized in that step 6 specifically comprises the following steps:
Step 6.1: after step 1 is completed, transform the Y channel of the reference image and the distorted image into the discrete cosine domain, and accumulate the discrete cosine coefficients in each of three frequency bands;
the statistics of the discrete cosine coefficients for the low-frequency region RL, the mid-frequency region RM and the high-frequency region RH are given by formulas (13), (14) and (15);
in formulas (13), (14) and (15), P(u, v) is a discrete cosine coefficient and (u, v) is the pixel position;
Step 6.2: after step 6.1 is completed, compute the spatial-frequency similarity between the reference image and the distorted image from the obtained low-, mid- and high-frequency discrete cosine coefficients, as in formulas (16), (17) and (18);
in formulas (16), (17) and (18), φRL, φRM and φRH are the discrete cosine coefficients of the low-, mid- and high-frequency regions of the reference image, φDL, φDM and φDH are those of the distorted image, N is the total number of pixels, and C1 (C1 = 0.6), C2 (C2 = 2000) and C3 (C3 = 1.7) are positive constants that stabilize the denominator and prevent it from being zero.
8. The full-reference image quality evaluation method based on combined spatial-domain and frequency-domain analysis according to claim 1, characterized in that step 7 specifically comprises the following steps:
Step 7.1: after step 1 is completed, compute the chrominance features of the reference image and the distorted image in the two chrominance channels I and Q from the obtained YIQ color space;
the color-feature similarity between any pixel position x of the reference image in the I (Q) channel and the corresponding pixel position x of the distorted image in the I (Q) channel takes the form of formulas (19) and (20);
in formulas (19) and (20), IR(x) and QR(x) are the color features of the reference image in the I and Q channels respectively, ID(x) and QD(x) are the color features of the distorted image in the I and Q channels respectively, N is the total number of pixels, and C4 (C4 = 200) is a positive constant that stabilizes the denominator and prevents it from being zero;
Step 7.2: after step 7.1 is completed, compute the global color similarity between the reference image and the distorted image from the obtained color similarities of the I and Q channels, as in formula (21);
in formula (21), N is the total number of pixels and x is the pixel position.
9. The full-reference image quality evaluation method based on combined spatial-domain and frequency-domain analysis according to claim 1, characterized in that step 8 specifically comprises the following steps:
Step 8.1: after steps 4, 5, 6 and 7 are completed, take the nine obtained similarity features (the global maximum-structure similarity CCLS, the per-scale texture similarities, and the similarities SL, SM, SH and SC), combine them with the subjective mean opinion score (MOS) values of the distorted images in the database, input them jointly into the regression model built by the random forest (RF) and train it, setting the number of decision trees in the model to ntree = 500 and the number of candidate variables per node split to mtry = 3;
Step 8.2: after step 8.1 is completed, use the trained regression model to evaluate one or more distorted images under test: first extract the similarity features of each distorted image and its corresponding reference image according to steps 4, 5, 6 and 7, then input the similarity features into the trained RF regression model to obtain the predicted quality score, thereby completing the evaluation of distorted-image quality.
CN201810207410.1A 2018-03-14 2018-03-14 Full-reference image quality evaluation method based on spatial domain combined frequency domain analysis Active CN108830823B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810207410.1A CN108830823B (en) 2018-03-14 2018-03-14 Full-reference image quality evaluation method based on spatial domain combined frequency domain analysis

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810207410.1A CN108830823B (en) 2018-03-14 2018-03-14 Full-reference image quality evaluation method based on spatial domain combined frequency domain analysis

Publications (2)

Publication Number Publication Date
CN108830823A true CN108830823A (en) 2018-11-16
CN108830823B CN108830823B (en) 2021-10-26

Family

ID=64154589

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810207410.1A Active CN108830823B (en) 2018-03-14 2018-03-14 Full-reference image quality evaluation method based on spatial domain combined frequency domain analysis

Country Status (1)

Country Link
CN (1) CN108830823B (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110276744A (en) * 2019-05-15 2019-09-24 北京航空航天大学 The assessment method and device of image mosaic quality
CN111126493A (en) * 2019-12-25 2020-05-08 东软睿驰汽车技术(沈阳)有限公司 Deep learning model training method and device, electronic equipment and storage medium
CN111127407A (en) * 2019-12-11 2020-05-08 北京航空航天大学 Fourier transform-based style migration counterfeit image detection device and method
CN111461181A (en) * 2020-03-16 2020-07-28 北京邮电大学 Vehicle fine-grained classification method and device
CN112950597A (en) * 2021-03-09 2021-06-11 深圳大学 Distorted image quality evaluation method and device, computer equipment and storage medium
CN114067006A (en) * 2022-01-17 2022-02-18 湖南工商大学 Screen content image quality evaluation method based on discrete cosine transform
CN114240849A (en) * 2021-11-25 2022-03-25 电子科技大学 Gradient change-based no-reference JPEG image quality evaluation method
CN117275080A (en) * 2023-11-22 2023-12-22 深圳市美爱堂科技有限公司 Eye state identification method and system based on computer vision

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2008115405A2 (en) * 2007-03-16 2008-09-25 Sti Medicals Systems, Llc A method of image quality assessment to procuce standardized imaging data
CN101489130A (en) * 2009-01-21 2009-07-22 西安交通大学 Complete reference image quality assessment method based on image edge difference statistical characteristic
WO2013148566A1 (en) * 2012-03-26 2013-10-03 Viewdle, Inc. Image blur detection
CN106920232A (en) * 2017-02-22 2017-07-04 武汉大学 Gradient similarity graph image quality evaluation method and system based on conspicuousness detection
CN107610093A (en) * 2017-08-02 2018-01-19 西安理工大学 Full-reference image quality evaluating method based on similarity feature fusion

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2008115405A2 (en) * 2007-03-16 2008-09-25 Sti Medicals Systems, Llc A method of image quality assessment to procuce standardized imaging data
CN101489130A (en) * 2009-01-21 2009-07-22 西安交通大学 Complete reference image quality assessment method based on image edge difference statistical characteristic
WO2013148566A1 (en) * 2012-03-26 2013-10-03 Viewdle, Inc. Image blur detection
CN106920232A (en) * 2017-02-22 2017-07-04 武汉大学 Gradient similarity graph image quality evaluation method and system based on conspicuousness detection
CN107610093A (en) * 2017-08-02 2018-01-19 西安理工大学 Full-reference image quality evaluating method based on similarity feature fusion

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
AZEDDINE BEGHDADI, ET AL: "Image Quality Assessment Using the Joint Spatial/Spatial-Frequency Representation", EURASIP JOURNAL ON ADVANCES IN SIGNAL PROCESSING *
GUANGYAO CAO, ET AL: "Image Quality Assessment Using Spatial Frequency Component", PCM 2009 *
WANG JUNPING, ET AL: "Blind Image Quality Assessment Based on DCT Features of Non-flat Image Regions and EGRNN", CHINESE JOURNAL OF COMPUTERS *
HU ANZHOU: "Research on Image Perceptual Quality Assessment Methods with Subjective-Objective Consistency", CHINA DOCTORAL DISSERTATIONS FULL-TEXT DATABASE, INFORMATION SCIENCE AND TECHNOLOGY *

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110276744B (en) * 2019-05-15 2021-10-26 北京航空航天大学 Image splicing quality evaluation method and device
CN110276744A (en) * 2019-05-15 2019-09-24 北京航空航天大学 The assessment method and device of image mosaic quality
CN111127407A (en) * 2019-12-11 2020-05-08 北京航空航天大学 Fourier transform-based style migration counterfeit image detection device and method
CN111127407B (en) * 2019-12-11 2022-06-28 北京航空航天大学 Fourier transform-based style migration forged image detection device and method
CN111126493A (en) * 2019-12-25 2020-05-08 东软睿驰汽车技术(沈阳)有限公司 Deep learning model training method and device, electronic equipment and storage medium
CN111126493B (en) * 2019-12-25 2023-08-01 东软睿驰汽车技术(沈阳)有限公司 Training method and device for deep learning model, electronic equipment and storage medium
CN111461181A (en) * 2020-03-16 2020-07-28 北京邮电大学 Vehicle fine-grained classification method and device
CN112950597A (en) * 2021-03-09 2021-06-11 深圳大学 Distorted image quality evaluation method and device, computer equipment and storage medium
CN112950597B (en) * 2021-03-09 2022-03-08 深圳大学 Distorted image quality evaluation method and device, computer equipment and storage medium
CN114240849B (en) * 2021-11-25 2023-03-31 电子科技大学 Gradient change-based no-reference JPEG image quality evaluation method
CN114240849A (en) * 2021-11-25 2022-03-25 电子科技大学 Gradient change-based no-reference JPEG image quality evaluation method
CN114067006A (en) * 2022-01-17 2022-02-18 湖南工商大学 Screen content image quality evaluation method based on discrete cosine transform
CN114067006B (en) * 2022-01-17 2022-04-08 湖南工商大学 Screen content image quality evaluation method based on discrete cosine transform
CN117275080A (en) * 2023-11-22 2023-12-22 深圳市美爱堂科技有限公司 Eye state identification method and system based on computer vision

Also Published As

Publication number Publication date
CN108830823B (en) 2021-10-26

Similar Documents

Publication Publication Date Title
CN108830823A (en) The full-reference image quality evaluating method of frequency-domain analysis is combined based on airspace
CN111488756B (en) Face recognition-based living body detection method, electronic device, and storage medium
WO2017092431A1 (en) Human hand detection method and device based on skin colour
CN106920232B (en) Gradient similarity graph image quality evaluation method and system based on conspicuousness detection
CN108229288B (en) Neural network training and clothes color detection method and device, storage medium and electronic equipment
CN109191428A (en) Full-reference image quality evaluating method based on masking textural characteristics
CN104021545A (en) Full-reference color image quality evaluation method based on visual saliency
CN106650615B (en) A kind of image processing method and terminal
CN101853504A (en) Image quality evaluating method based on visual character and structural similarity (SSIM)
CN106056155A (en) Super-pixel segmentation method based on boundary information fusion
CN103366178A (en) Method and device for carrying out color classification on target image
CN108961227A (en) A kind of image quality evaluating method based on airspace and transform domain multiple features fusion
CN107506738A (en) Feature extracting method, image-recognizing method, device and electronic equipment
Yang et al. FG-GAN: a fine-grained generative adversarial network for unsupervised SAR-to-optical image translation
CN112184672A (en) No-reference image quality evaluation method and system
CN106683074B (en) A kind of distorted image detection method based on haze characteristic
Chang et al. Semantic-relation transformer for visible and infrared fused image quality assessment
CN104243970A (en) 3D drawn image objective quality evaluation method based on stereoscopic vision attention mechanism and structural similarity
CN113850748A (en) Point cloud quality evaluation system and method
CN113011506B (en) Texture image classification method based on deep fractal spectrum network
CN108510474A (en) Evaluation method, system, memory and the electronic equipment of tobacco leaf image quality
CN106960188B (en) Weather image classification method and device
CN109345520A (en) A kind of quality evaluating method of image definition
CN109741351A (en) A kind of classification responsive type edge detection method based on deep learning
CN107578406A (en) Based on grid with Wei pool statistical property without with reference to stereo image quality evaluation method

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant