CN108833920A - DVC side information fusion method based on optical flow and block matching - Google Patents

DVC side information fusion method based on optical flow and block matching (Download PDF)

Info

Publication number
CN108833920A
Authority
CN
China
Prior art keywords
frame
side information
optical flow
information
block
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810563580.3A
Other languages
Chinese (zh)
Other versions
CN108833920B (en)
Inventor
卿粼波
熊珊珊
何小海
王正勇
荣松
滕奇志
熊淑华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sichuan University
Original Assignee
Sichuan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sichuan University filed Critical Sichuan University
Priority to CN201810563580.3A priority Critical patent/CN108833920B/en
Publication of CN108833920A publication Critical patent/CN108833920A/en
Application granted granted Critical
Publication of CN108833920B publication Critical patent/CN108833920B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/146Data rate or code amount at the encoder output
    • H04N19/147Data rate or code amount at the encoder output according to rate distortion criteria
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/136Incoming video signal characteristics or properties
    • H04N19/137Motion inside a coding unit, e.g. average field, frame or block difference
    • H04N19/139Analysis of motion vectors, e.g. their magnitude, direction, variance or reliability
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • H04N19/176Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/177Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a group of pictures [GOP]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/587Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal sub-sampling or interpolation, e.g. decimation or subsequent interpolation of pictures in a video sequence

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The present invention provides a DVC side information fusion method based on optical flow and block matching, and mainly relates to a decoder-side scheme for generating WZ-frame side information. Traditional block-matching side information generation schemes can achieve good coding quality for video sequences with relatively gentle motion, but when the motion in the video is violent or the group of pictures is large, the block matching algorithm struggles to obtain high side information quality, whereas optical flow can represent the motion information between video frames very well. The method of the present invention therefore proposes a side information fusion algorithm based on optical flow and block matching. Experimental results show that the method of the present invention obtains higher side information quality than the traditional block matching algorithm and improves the overall rate-distortion performance of the system.

Description

DVC side information fusion method based on optical flow and block matching
Technical field
The present invention relates to video coding techniques in the field of image communication, and more particularly to a side information quality enhancement technique in distributed video coding (DVC).
Background art
In recent years, with the rapid development of digital communication, multimedia technology, wireless communication, and other high technologies, people's requirements on video quality have become higher and higher, for example in wireless sensor networks, video conferencing, wireless surveillance video, and smart phones. Compared with audio and text alone, video images provide more intuitive and richer visual information, but their huge data volume also poses a challenge to video compression coding systems. Novel mobile terminal devices are generally limited in computing power, storage capacity, and power consumption. Distributed Video Coding (DVC) has the characteristics of low encoder complexity and strong robustness: using independent encoding and joint decoding, it exploits the correlation between frames and between sources at the decoder, thereby moving complex motion compensation and motion estimation from the encoder to the decoder and reducing the complexity of the encoder. These characteristics provide a new and effective solution for video applications with limited encoder computing power and poor wireless transmission reliability.
In DVC, the video sequence is alternately divided into Wyner-Ziv frames (WZ frames) and key frames (K frames). The K frames are encoded and decoded independently with a conventional intra-frame codec and can be used to assist the generation of the WZ-frame side information. The quality of the generated side information has a decisive effect on the rate-distortion performance of the whole system. There are several traditional side information generation methods: key frame duplication, key frame averaging, motion-compensated extrapolation, and motion-compensated interpolation. These traditional side information generation schemes can achieve good coding quality for video sequences with relatively gentle motion, but when the motion in the video is violent or the group of pictures (GOP) is large, all of them struggle to obtain high side information quality.
The DVC coding framework used by the present invention is the motion-learning-based transform-domain Wyner-Ziv video coding framework (Motion Learning-based transform domain Wyner-Ziv, MLWZ). The MLWZ scheme is built on the DCT-domain Wyner-Ziv coding scheme proposed by Stanford University and optimizes the corresponding modules; its idea is to progressively update and learn from the band-by-band decoded information and the motion vectors of the current block's neighborhood, so that higher-quality side information can be obtained to a certain extent. However, the initial side information of the motion-learning-based MLWZ framework is generated by a block-matching-based motion-compensated interpolation algorithm, which struggles to obtain ideal side information quality when the video motion is violent or the GOP is large. On the other hand, the progressive updating and optimization of the side information is carried out on the basis of the initial side information: the higher the initial side information quality of the system, the better the final coding performance of the whole system. Therefore, on the basis of the MLWZ framework, the present invention uses the optical flow information generated between video frames to improve the quality of the initial side information at the MLWZ decoder, and thereby further improves the overall rate-distortion performance of the DVC system.
Summary of the invention
In DVC, the side information of a WZ frame can be regarded as a prediction, obtained at the decoder, of the current WZ frame to be decoded. The more similar the side information is to the original WZ frame, the fewer parity bits the encoder needs to send to the decoder, the lower the bit rate of the system, and the higher the coding efficiency. Therefore, in DVC the side information has a decisive effect on the performance of the entire coding system.
For convenience of explanation, the concept of optical flow is introduced first:
In computer vision and image processing, optical flow is commonly used to express the changes and motion between images. The concept of optical flow was first proposed by James J. Gibson in the 1940s. For a video sequence, the optical flow represents the motion speed and motion direction of every pixel of each image in the video; Fig. 1 shows the optical flow generated from the 1st and 2nd frames of the video sequence Foreman. When the motion in a video sequence is intense, optical flow can still represent the motion information between images well, which effectively compensates for the drop in side information quality that the block matching algorithm suffers under violent motion. Commonly used optical flow algorithms are relatively mature; the present invention uses the FlowNet 2.0 framework based on deep convolutional neural networks to generate the optical flow information. The specific steps of the DVC side information fusion method based on optical flow and block matching are as follows:
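As a concrete illustration of the dense flow estimation used below, the following sketch computes a forward flow field between two key frames. The patent itself uses FlowNet 2.0 for this step; the OpenCV Farneback estimator, the synthetic QCIF frames, and the parameter values used here are stand-in assumptions that merely produce the same kind of dense (H, W, 2) flow field.

```python
# Hedged sketch: the patent estimates flow with FlowNet 2.0; OpenCV's classical
# Farneback estimator is used here only as a stand-in that yields the same kind
# of dense (H, W, 2) flow field.
import cv2
import numpy as np

def dense_flow(frame_a: np.ndarray, frame_b: np.ndarray) -> np.ndarray:
    """Dense optical flow from frame_a to frame_b (8-bit grayscale frames)."""
    return cv2.calcOpticalFlowFarneback(
        frame_a, frame_b, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)

# Toy usage on synthetic QCIF-sized frames (176 x 144, as in the experiments):
h, w = 144, 176
key_prev = np.random.randint(0, 256, (h, w), dtype=np.uint8)
key_next = np.roll(key_prev, 2, axis=1)      # simulate a 2-pixel horizontal shift
flow_fwd = dense_flow(key_prev, key_next)    # flow_fwd[y, x] = (dx, dy)
print(flow_fwd.shape)                        # (144, 176, 2)
```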
A. Side information generation based on forward and backward optical flow
Considering the importance of the initial side information quality for the rate-distortion performance of the whole system, the present invention interpolates the forward and backward optical flow information of the frame to be decoded to generate the optical flow side information. With GOP = 2, the current WZ frame to be decoded is denoted I_t, and I_{t-1} and I_{t+1} denote the decoded key frames before and after I_t; the forward and backward optical flow side information interpolation frames of I_t are computed as follows (an illustrative sketch is given after step A6):
A1. Obtain the forward optical flow V_f from frame I_{t-1} to frame I_{t+1} using the optical flow algorithm;
A2. Compute the optical flow V'_f from the preceding key frame I_{t-1} to the current frame I_t as V'_f(x, y) = (1/2) V_f(x, y), where x and y denote the horizontal and vertical coordinates in the optical flow field and each point is processed individually;
A3. Compute the forward optical flow side information of the WZ frame, where the round() function denotes rounding to the nearest integer and x and y denote the horizontal and vertical coordinates in the optical flow field:
A4. Obtain the backward optical flow V_b from frame I_{t+1} to frame I_{t-1} using the optical flow algorithm;
A5. Compute the optical flow V'_b from the following key frame I_{t+1} to the current frame I_t as V'_b(x, y) = (1/2) V_b(x, y), where x and y denote the horizontal and vertical coordinates in the optical flow field and each point is processed individually;
A6. Compute the backward optical flow side information of the WZ frame, where x and y denote the horizontal and vertical coordinates in the optical flow field:
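The interpolation formulas of steps A3 and A6 appear only as figures in this text, so the sketch below is a hedged reconstruction of steps A1 to A6 under one stated assumption: each pixel (x, y) of a key frame is forward-warped to (x + round(V'_x(x, y)), y + round(V'_y(x, y))) in the interpolated frame. Collisions are resolved naively here (the patent instead keeps the candidate minimizing the inter-key-frame difference), and unassigned positions are left as hole points for the Fig. 3 filling step.

```python
# Hedged reconstruction of steps A1-A6: halve the key-to-key flow (A2/A5), then
# forward-warp the key frame with rounded displacements (A3/A6). The exact
# warping formula of the patent is a figure and is assumed here, not quoted.
import numpy as np

def flow_side_information(key_frame: np.ndarray, key_to_key_flow: np.ndarray) -> np.ndarray:
    """Warp `key_frame` halfway along `key_to_key_flow` toward the WZ frame I_t.

    Returns a float array in which -1 marks hole points (positions no pixel
    reached); these are filled later by the neighborhood search of Fig. 3.
    """
    h, w = key_frame.shape
    half_flow = 0.5 * key_to_key_flow                  # V' = (1/2) V  (steps A2 / A5)
    out = -np.ones((h, w), dtype=np.float32)
    ys, xs = np.mgrid[0:h, 0:w]
    tx = xs + np.rint(half_flow[..., 0]).astype(int)   # round() to nearest integer
    ty = ys + np.rint(half_flow[..., 1]).astype(int)
    valid = (tx >= 0) & (tx < w) & (ty >= 0) & (ty < h)
    # Collision handling is simplified: later writes overwrite earlier ones, whereas
    # the patent keeps the candidate minimizing |I_{t+1}(...) - I_{t-1}(...)|.
    out[ty[valid], tx[valid]] = key_frame[ys[valid], xs[valid]]
    return out

# Forward side information:  warp I_{t-1} with flow(I_{t-1} -> I_{t+1})   (A1-A3)
# Backward side information: warp I_{t+1} with flow(I_{t+1} -> I_{t-1})   (A4-A6)
```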
B. Side information fusion based on optical flow and block matching
B1. Using the decoded frames I_{t-1} and I_{t+1} before and after the WZ frame, obtain the side information of the WZ frame by the block matching method;
B2. Fuse the forward optical flow side information computed in step A3, the backward optical flow side information computed in step A6, and the side information obtained by block matching to obtain the final side information (a hedged sketch of this fusion step is given below):
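The fusion rule of step B2 is likewise given only as a figure, so the sketch below uses an assumed placeholder rule: a pixel-wise average of the available candidates, with hole points in either flow candidate falling back to the block-matching result. It illustrates the data flow of the fusion step, not the patent's actual formula.

```python
# Hedged sketch of step B2: the patent's fusion formula is not reproduced here;
# a plain pixel-wise average of the available candidates is used as a placeholder.
import numpy as np

def fuse_side_information(si_fwd: np.ndarray,
                          si_bwd: np.ndarray,
                          si_block: np.ndarray) -> np.ndarray:
    """Fuse forward-flow, backward-flow and block-matching side information."""
    cands = np.stack([si_fwd, si_bwd, si_block]).astype(np.float32)
    cands[cands < 0] = np.nan                  # -1 marks holes in the flow candidates
    fused = np.nanmean(cands, axis=0)          # average whatever candidates exist
    fused = np.where(np.isnan(fused), si_block, fused)  # all-hole pixels: block matching
    return np.clip(np.rint(fused), 0, 255).astype(np.uint8)
```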
The key of the present invention is to use the forward and backward optical flow information between video frames to generate the optical flow side information of the current WZ frame to be decoded, and then to fuse it with the side information generated by the block matching algorithm to obtain the final WZ-frame side information. This method can obtain higher-quality side information even when the video motion is violent or the GOP is large. Therefore, the method of the present invention can effectively improve the quality of the side information in DVC and at the same time improve the overall coding performance of the DVC system.
To make the above content, features, and advantages of the present invention more comprehensible, a detailed description is given below with reference to the accompanying drawings.
Description of the drawings
Fig. 1 is the optical flow schematic of the video sequence Foreman of the present invention; Fig. 1(c) shows the optical flow information generated from the video frames in Fig. 1(a) and Fig. 1(b);
Fig. 2 is the system block diagram of the DVC side information fusion method based on optical flow and block matching of the present invention;
Fig. 3 is the hole-point filling scheme based on neighborhood search of the present invention;
Fig. 4 shows the experimental comparison of the method of the present invention with the original motion-learning scheme and the motion-compensated interpolation side information scheme, where Fig. 4(a) is the result curve for the video sequence Foreman and Fig. 4(b) is the result curve for the video sequence Soccer.
Detailed description of the embodiments
The present invention is described in further detail below with reference to the accompanying drawings and embodiments. It must be pointed out that the following embodiments are only intended to further describe the present invention and should not be understood as limiting the protection scope of the present invention. Non-essential modifications and adjustments made to the present invention by persons skilled in the art according to the above content of the invention still fall within the protection scope of the present invention.
Fig. 2 shows the system block diagram of the method of the present invention. The specific implementation is as follows:
(1) At the DVC encoder, the original video sequence is alternately divided into WZ frames and K frames. The K frames are encoded with the conventional H.264/AVC intra-frame coding method, while the WZ frames are encoded with the LDPCA channel coding method after DCT transform, quantization, and bit-plane extraction (an illustrative sketch of this transform, quantization, and bit-plane step is given after step (4) below);
(2) At the DVC decoder, the K frames are decoded with the conventional intra-frame decoding process. For the decoding of a WZ frame, the forward and backward optical flow side information of the current WZ frame is first generated according to the following steps:
With GOP = 2, the current WZ frame to be decoded is denoted I_t, and I_{t-1} and I_{t+1} denote the decoded key frames before and after I_t; the forward and backward optical flow side information interpolation frames of I_t are computed as follows:
A1. Obtain the forward optical flow V_f from frame I_{t-1} to frame I_{t+1} using the optical flow algorithm;
A2. Compute the optical flow V'_f from the preceding key frame I_{t-1} to the current frame I_t as V'_f(x, y) = (1/2) V_f(x, y), where x and y denote the horizontal and vertical coordinates in the optical flow field and each point is processed individually;
A3. Compute the forward optical flow side information of the WZ frame, where the round() function denotes rounding to the nearest integer and x and y denote the horizontal and vertical coordinates in the optical flow field:
A4. Obtain the backward optical flow V_b from frame I_{t+1} to frame I_{t-1} using the optical flow algorithm;
A5. Compute the optical flow V'_b from the following key frame I_{t+1} to the current frame I_t as V'_b(x, y) = (1/2) V_b(x, y), where x and y denote the horizontal and vertical coordinates in the optical flow field and each point is processed individually;
A6. Compute the backward optical flow side information of the WZ frame, where x and y denote the horizontal and vertical coordinates in the optical flow field:
(3) Then the optical flow and block matching side information are fused according to the following steps to obtain the final WZ-frame side information:
B1. Using the decoded frames I_{t-1} and I_{t+1} before and after the WZ frame, obtain the side information of the WZ frame by the block matching method;
B2. Fuse the forward optical flow side information computed in step A3, the backward optical flow side information computed in step A6, and the side information obtained by block matching to obtain the final side information:
(4) Auxiliary decoding of the WZ frame is carried out using the fused WZ-frame side information, and the decoded WZ frame is obtained.
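To make step (1) concrete, the sketch below runs a WZ frame through a 4x4 block DCT, a crude uniform quantizer, and bit-plane extraction. The block size, number of quantization levels, and the use of cv2.dct are illustrative assumptions; the LDPCA channel coding stage and the codec's actual quantization matrices are not shown.

```python
# Hedged sketch of the WZ-frame encoder path of step (1): 4x4 block DCT,
# uniform quantization and bit-plane extraction (LDPCA coding not shown).
import cv2
import numpy as np

def wz_bitplanes(frame: np.ndarray, block: int = 4, levels: int = 16):
    """Return the bit planes (MSB first) of crudely quantized DCT coefficients."""
    h, w = frame.shape                      # assumes h and w are multiples of `block`
    coeffs = np.zeros((h, w), dtype=np.float32)
    for y in range(0, h, block):
        for x in range(0, w, block):
            coeffs[y:y + block, x:x + block] = cv2.dct(
                frame[y:y + block, x:x + block].astype(np.float32))
    lo, hi = coeffs.min(), coeffs.max()     # crude global uniform quantizer
    q = np.clip(((coeffs - lo) / (hi - lo + 1e-9) * levels).astype(np.int32),
                0, levels - 1)
    nbits = int(np.ceil(np.log2(levels)))
    return [((q >> b) & 1).astype(np.uint8) for b in range(nbits - 1, -1, -1)]

# planes = wz_bitplanes(wz_frame)           # each plane would then be LDPCA-encoded
```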
Specifically, in steps A1 and A4, although traditional optical flow algorithms can represent the motion characteristics of video images well, their computational time complexity is high, so they are not suitable for applications with high real-time requirements. The present invention uses the FlowNet 2.0 framework based on deep convolutional neural networks to generate the optical flow information, which not only yields high-quality optical flow but also gives an obvious improvement in speed.
Specifically, in the calculation of steps A3 and A6, occlusion may occur, that is, several pixels of I_{t+1} may map to the same pixel position of the interpolation frame. The present invention handles this by choosing, as the final result, the candidate that minimizes |I_{t+1}(x, y) - I_{t-1}(x + V_b(x), y + V_b(y))|. Corresponding to occlusion, hole points may also appear, that is, some pixel position of the interpolation frame may have no corresponding pixel in I_{t-1}. The present invention uses a simple neighborhood filling scheme to fill the hole points, as shown in Fig. 3: the black block represents a hole pixel in the forward optical flow side information interpolation frame, the white blocks represent hole pixels in its neighborhood, and the grey blocks represent non-hole points in its neighborhood. To find the filling value of the current hole pixel, its neighborhood is traversed from the inside outward until the first non-hole point is found, which is taken as the final filling result.
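A minimal sketch of this hole-point filling is shown below, assuming the search expands over growing square windows centered on the hole; the exact scan order within each window is not specified by the text and is an assumption here.

```python
# Hedged sketch of the Fig. 3 hole filling: search outward from each hole pixel
# over growing square windows and copy the first non-hole value encountered.
import numpy as np

def fill_holes(frame: np.ndarray, hole_value: float = -1.0) -> np.ndarray:
    """Fill pixels equal to `hole_value` from the nearest non-hole neighborhood."""
    out = frame.copy()
    h, w = frame.shape
    for y, x in np.argwhere(frame == hole_value):
        for r in range(1, max(h, w)):                    # from inside to outside
            y0, y1 = max(0, y - r), min(h, y + r + 1)
            x0, x1 = max(0, x - r), min(w, x + r + 1)
            window = frame[y0:y1, x0:x1]
            non_hole = window[window != hole_value]
            if non_hole.size:                            # first non-hole point found
                out[y, x] = non_hole[0]
                break
    return out
```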
In order to verify the effectiveness of the DVC side information fusion scheme based on optical flow and block matching proposed by the present invention, an experimental analysis is carried out on the Foreman and Soccer video sequences. The PSNR of the decoded WZ frames obtained by the method of the present invention is compared with that of the original motion-learning scheme of the MLWZ framework and of the side information scheme generated by motion-compensated interpolation. The test video sequences are in QCIF format (176 × 144) with a frame rate of 15 fps; 150 frames are tested, and only the luminance component is evaluated. The GOP size is set to 2, the quantization parameter Iqp of the K frames is 25, and the quantization parameter of the WZ frames is Q8. The experimental results are shown in Fig. 4, where the abscissa is the video frame number and the ordinate is the PSNR value; a higher PSNR indicates a better quality of the reconstructed video frame. It can be seen that, compared with the interpolation side information method and the original motion-learning scheme, the PSNR of the side information generated by the method of the present invention on the high-motion Foreman and Soccer sequences is improved to different degrees and the performance is more stable; the improvement is particularly obvious for video frames whose side information quality is poor under the interpolation side information algorithm. The main reason is that the side information generation scheme based on the fusion of optical flow and block matching can better represent the motion characteristics of video images, while the motion-learning coding mode is used to realize the band-by-band updating of the side information. Therefore, compared with the typical interpolation side information algorithm and the original motion-learning scheme, the side information optimization scheme proposed by the method of the present invention can obtain a higher side information quality.
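The curves in Fig. 4 compare PSNR values of the decoded WZ frames; for reference, a minimal sketch of the PSNR computation on an 8-bit luminance frame is given below (the 255 peak value follows the 8-bit test material).

```python
# Minimal PSNR sketch for 8-bit luminance frames, as used for the Fig. 4 comparison.
import numpy as np

def psnr(reference: np.ndarray, reconstructed: np.ndarray) -> float:
    """Peak signal-to-noise ratio in dB between two 8-bit luminance frames."""
    diff = reference.astype(np.float64) - reconstructed.astype(np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(255.0 ** 2 / mse)
```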

Claims (3)

1. A DVC side information fusion method based on optical flow and block matching, characterized by mainly comprising the following steps:
A. Side information generation based on forward and backward optical flow
Considering the importance of the initial side information quality for the rate-distortion performance of the whole system, the present invention interpolates the forward and backward optical flow information of the frame to be decoded to generate the optical flow side information. With GOP = 2, the current WZ frame to be decoded is denoted I_t, and I_{t-1} and I_{t+1} denote the decoded key frames before and after I_t; the forward and backward optical flow side information interpolation frames of I_t are computed as follows:
A1. Obtain the forward optical flow V_f from frame I_{t-1} to frame I_{t+1} using the optical flow algorithm;
A2. Compute the optical flow V'_f from the preceding key frame I_{t-1} to the current frame I_t as V'_f(x, y) = (1/2) V_f(x, y), where x and y denote the horizontal and vertical coordinates in the optical flow field and each point is processed individually;
A3. Compute the forward optical flow side information of the WZ frame, where the round() function denotes rounding to the nearest integer and x and y denote the horizontal and vertical coordinates in the optical flow field:
A4. Obtain the backward optical flow V_b from frame I_{t+1} to frame I_{t-1} using the optical flow algorithm;
A5. Compute the optical flow V'_b from the following key frame I_{t+1} to the current frame I_t as V'_b(x, y) = (1/2) V_b(x, y), where x and y denote the horizontal and vertical coordinates in the optical flow field and each point is processed individually;
A6. Compute the backward optical flow side information of the WZ frame, where x and y denote the horizontal and vertical coordinates in the optical flow field:
B. Side information fusion based on optical flow and block matching
B1. Using the decoded frames I_{t-1} and I_{t+1} before and after the WZ frame, obtain the side information of the WZ frame by the block matching method;
B2. Fuse the forward optical flow side information computed in step A3, the backward optical flow side information computed in step A6, and the side information obtained by block matching to obtain the final side information:
2. The DVC side information fusion method based on optical flow and block matching according to claim 1, characterized in that, in step A, the forward and backward optical flow information of the current WZ frame to be decoded is obtained with the optical flow algorithm based on the FlowNet 2.0 framework, and the forward and backward optical flow side information of the WZ frame is generated from this optical flow information.
3. The DVC side information fusion method based on optical flow and block matching according to claim 1, characterized in that, in step B, the forward optical flow side information, the backward optical flow side information, and the side information obtained by block matching are fused to obtain the final side information.
CN201810563580.3A 2018-06-04 2018-06-04 DVC side information fusion method based on optical flow and block matching Active CN108833920B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810563580.3A CN108833920B (en) 2018-06-04 2018-06-04 DVC side information fusion method based on optical flow and block matching

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810563580.3A CN108833920B (en) 2018-06-04 2018-06-04 DVC side information fusion method based on optical flow and block matching

Publications (2)

Publication Number Publication Date
CN108833920A true CN108833920A (en) 2018-11-16
CN108833920B CN108833920B (en) 2022-02-11

Family

ID=64143584

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810563580.3A Active CN108833920B (en) 2018-06-04 2018-06-04 DVC side information fusion method based on optical flow and block matching

Country Status (1)

Country Link
CN (1) CN108833920B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113822041A (en) * 2020-06-18 2021-12-21 四川大学 Deep neural network natural scene text detection method suitable for dense text
CN117470248A (en) * 2023-12-27 2024-01-30 四川三江数智科技有限公司 Indoor positioning method for mobile robot

Patent Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100142620A1 (en) * 2008-12-04 2010-06-10 Electronics And Telecommunications Research Method of generating side information by correcting motion field error in distributed video coding and dvc decoder using the same
CN101860748A (en) * 2010-04-02 2010-10-13 西安电子科技大学 Side information generating system and method based on distributed video coding
CN101977323A (en) * 2010-11-16 2011-02-16 上海交通大学 Method for reconstructing distributed video coding based on constraints on temporal-spatial correlation of video
CN102413381A (en) * 2011-11-21 2012-04-11 福建师范大学 Video watermark based on optical flow method and digital holography
CN102611893A (en) * 2012-03-09 2012-07-25 北京邮电大学 DMVC (distributed multi-view video coding) side-information integration method based on histogram matching and SAD (sum of absolute differences) judgment
US20130250107A1 (en) * 2012-03-26 2013-09-26 Fujitsu Limited Image processing device, image processing method
CN103475879A (en) * 2013-09-10 2013-12-25 南京邮电大学 Side information generation method in distributed video coding
CN105488812A (en) * 2015-11-24 2016-04-13 江南大学 Spatio-temporal saliency detection method fusing motion features
CN105681797A (en) * 2016-01-12 2016-06-15 四川大学 Prediction residual based DVC-HEVC (Distributed Video Coding-High Efficiency Video Coding) video transcoding method
CN105939475A (en) * 2016-06-06 2016-09-14 中国矿业大学 High-quality side information generation method
CN106210449A (en) * 2016-08-11 2016-12-07 上海交通大学 Frame rate up-conversion motion estimation method and system based on multi-information fusion
CN107071447A (en) * 2017-04-06 2017-08-18 华南理工大学 Correlated noise modeling method based on secondary side information in DVC
CN107018412A (en) * 2017-04-20 2017-08-04 四川大学 DVC-HEVC video transcoding method based on key frame coding unit partition mode
CN107958260A (en) * 2017-10-27 2018-04-24 四川大学 Group behavior analysis method based on multi-feature fusion

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Y. Mohammad Taheri et al.: "Side information generation using optical flow and block matching in Wyner-Ziv video coding", 2014 21st IEEE International Conference on Electronics, Circuits and Systems (ICECS) *
汪芸 et al.: "Side information optimization algorithm in distributed video coding" (分布式视频编码中的边信息优化算法), Video Engineering (电视技术) *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113822041A (en) * 2020-06-18 2021-12-21 四川大学 Deep neural network natural scene text detection method suitable for dense text
CN113822041B (en) * 2020-06-18 2023-04-18 四川大学 Deep neural network natural scene text detection method suitable for dense text
CN117470248A (en) * 2023-12-27 2024-01-30 四川三江数智科技有限公司 Indoor positioning method for mobile robot
CN117470248B (en) * 2023-12-27 2024-04-02 四川三江数智科技有限公司 Indoor positioning method for mobile robot

Also Published As

Publication number Publication date
CN108833920B (en) 2022-02-11

Similar Documents

Publication Publication Date Title
Wu et al. Video compression through image interpolation
CN107105278B (en) The video coding and decoding system that motion vector automatically generates
Pan et al. A low-complexity screen compression scheme for interactive screen sharing
US7479957B2 (en) System and method for scalable portrait video
CN1204757C (en) Stereo video stream coder/decoder and stereo video coding/decoding system
US20130266078A1 (en) Method and device for correlation channel estimation
CN112004085A (en) Video coding method under guidance of scene semantic segmentation result
CN113747242B (en) Image processing method, image processing device, electronic equipment and storage medium
CN109905717A (en) A kind of H.264/AVC Encoding Optimization based on Space-time domain down-sampling and reconstruction
Chen et al. Compressed domain deep video super-resolution
CN110072119A (en) A kind of perception of content video adaptive transmission method based on deep learning network
CN112465698A (en) Image processing method and device
JP2007529184A (en) Method and apparatus for compressing digital image data using motion estimation
CN108833920A (en) A kind of DVC side information fusion method based on light stream and Block- matching
Li et al. Bi-level video: Video communication at very low bit rates
Chen et al. Robust ultralow bitrate video conferencing with second order motion coherency
KR20070011351A (en) Video quality enhancement and/or artifact reduction using coding information from a compressed bitstream
CN115665427A (en) Live broadcast data processing method and device and electronic equipment
CN104902256B (en) A kind of binocular stereo image decoding method based on motion compensation
CN113822801A (en) Compressed video super-resolution reconstruction method based on multi-branch convolutional neural network
Jacob et al. Deep Learning Approach to Video Compression
CN114979711A (en) Audio/video or image layered compression method and device
US20240298001A1 (en) Group of pictures size determination method and electronic device
CN102905129B (en) Distributed coding method of still image
US20240323452A1 (en) Pixel value mapping method

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant