
Video coding format

A video coding format[1][2] (or sometimes video compression format) is a content representation
format for storage or transmission of digital video content (such as in a data file or bitstream). It typically
uses a standardized video compression algorithm, most commonly based on discrete cosine transform
(DCT) coding and motion compensation. Examples of video coding formats include H.262 (MPEG-2 Part 2),
MPEG-4 Part 2, H.264 (MPEG-4 Part 10), HEVC (H.265), Theora, RealVideo RV40, VP9, and AV1. A
specific software or hardware implementation capable of compression or decompression to/from a specific
video coding format is called a video codec; an example of a video codec is Xvid, one of several codecs that implement encoding and decoding of video in the MPEG-4 Part 2 video coding format in software.

Some video coding formats are documented by a detailed technical specification document known as a
video coding specification. Some such specifications are written and approved by standardization
organizations as technical standards, and are thus known as video coding standards. The term 'standard' is sometimes applied to de facto standards as well as formal standards.

Video content encoded using a particular video coding format is normally bundled with an audio stream
(encoded using an audio coding format) inside a multimedia container format such as AVI, MP4, FLV,
RealMedia, or Matroska. As such, the user normally doesn't have an H.264 file, but instead has a .mp4 video file, which is an MP4 container containing H.264-encoded video, normally alongside AAC-encoded audio. Multimedia container formats can contain any one of a number of different video coding formats; for example, the MP4 container format can contain video in either the MPEG-2 Part 2 or the H.264 video coding format, among others. Another example is the initial specification for the file type WebM, which specified not only the container format (Matroska) but also exactly which video (VP8) and audio (Vorbis) compression formats are used inside the Matroska container, even though the Matroska container format itself is capable of
containing other video coding formats (VP9 video and Opus audio support was later added to the WebM
specification).
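
How a container wraps a coded video stream can be made concrete by looking at the box ("atom") structure of an MP4 file: an ISO base media file is a sequence of boxes, each introduced by a 4-byte big-endian size and a 4-byte type code, with the codec-specific sample descriptions (e.g. an 'avc1' entry for H.264) nested inside the 'moov' box and the encoded frames stored in 'mdat'. The following is a minimal Python sketch that lists only the top-level boxes; the file name is a placeholder.

```python
import struct

def list_top_level_boxes(path):
    """List the top-level boxes ("atoms") of an MP4 / ISO BMFF file.

    Each box starts with a 4-byte big-endian size and a 4-byte type code
    (e.g. b'ftyp', b'moov', b'mdat'); a size of 1 means a 64-bit size follows,
    and a size of 0 means the box extends to the end of the file.
    """
    boxes = []
    with open(path, "rb") as f:
        while True:
            header = f.read(8)
            if len(header) < 8:
                break
            size, box_type = struct.unpack(">I4s", header)
            if size == 1:                       # 64-bit "largesize" follows
                size = struct.unpack(">Q", f.read(8))[0]
                payload = size - 16
            elif size == 0:                     # box extends to end of file
                boxes.append((box_type.decode("latin-1"), None))
                break
            else:
                payload = size - 8
            boxes.append((box_type.decode("latin-1"), size))
            f.seek(payload, 1)                  # skip the box contents
    return boxes

# Example (placeholder path); a typical MP4 lists something like
# [('ftyp', ...), ('moov', ...), ('mdat', ...)]:
# print(list_top_level_boxes("example.mp4"))
```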

Contents
Distinction between "format" and "codec"
History
Motion-compensated DCT
Video coding standards
List of video coding standards
Lossless, lossy, and uncompressed video coding formats
Intra-frame video coding formats
Profiles and levels
See also
References and notes

Distinction between "format" and "codec"


Although video coding formats such as H.264 are sometimes referred to as codecs, there is a clear
conceptual difference between a specification and its implementations. Video coding formats are described
in specifications, and software or hardware to encode/decode data in a given video coding format from/to
uncompressed video are implementations of those specifications. As an analogy, the video coding format
H.264 (specification) is to the codec OpenH264 (specific implementation) what the C Programming
Language (specification) is to the compiler GCC (specific implementation). Note that for each specification (e.g. H.264), there can be many codecs implementing that specification (e.g. x264, OpenH264, and the many other H.264/MPEG-4 AVC encoder and decoder products).

This distinction is not reflected consistently in the literature's terminology. The H.264 specification refers to H.261, H.262, H.263, and H.264 as video coding standards and does not contain the word codec.[3] The
Alliance for Open Media clearly distinguishes between the AV1 video coding format and the accompanying
codec they are developing, but calls the video coding format itself a video codec specification.[4] The VP9
specification calls the video coding format VP9 itself a codec.[5]

As an example of conflation, Chromium's[6] and Mozilla's[7] pages listing their video format support both
call video coding formats such as H.264 codecs. As another example, in Cisco's announcement of a free-as-
in-beer video codec, the press release refers to the H.264 video coding format as a "codec" ("choice of a
common video codec"), but calls Cisco's implementation of a H.264 encoder/decoder a "codec" shortly
thereafter ("open-source our H.264 codec").[8]

A video coding format does not dictate all algorithms used by a codec implementing the format. For
example, a large part of how video compression typically works is by finding similarities between video
frames (block-matching), and then achieving compression by copying previously-coded similar subimages
(e.g., macroblocks) and adding small differences when necessary. Finding optimal combinations of such
predictors and differences is an NP-hard problem,[9] meaning that it is practically infeasible to guarantee an optimal solution. While the video coding format must support such compression across frames in the bitstream format, it does not needlessly mandate specific algorithms for finding block matches or for other encoding steps, so the codecs implementing the specification have some freedom to optimize and innovate in their choice of algorithms. For example, section 0.5 of the H.264 specification says that
encoding algorithms are not part of the specification.[3] Free choice of algorithm also allows different
space–time complexity trade-offs for the same video coding format, so a live feed can use a fast but space-
inefficient algorithm, while a one-time DVD encoding for later mass production can trade long encoding-
time for space-efficient encoding.
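
The encoder-side freedom described above can be illustrated with the simplest possible block-matching strategy: an exhaustive search that compares a block of the current frame against shifted candidate blocks in a reference frame using the sum of absolute differences (SAD). Real encoders use much smarter search patterns; this is only a sketch of the idea, with NumPy arrays standing in for decoded frames.

```python
import numpy as np

def best_match(ref, cur, top, left, block=16, search=8):
    """Exhaustive block-matching motion estimation for a single block.

    ref, cur : 2-D grayscale frames (NumPy arrays of equal shape).
    Returns the (dy, dx) motion vector minimising the sum of absolute
    differences (SAD) within a +/- `search` pixel window, plus that SAD.
    """
    target = cur[top:top + block, left:left + block].astype(np.int32)
    best_vector, best_sad = (0, 0), float("inf")
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + block > ref.shape[0] or x + block > ref.shape[1]:
                continue  # candidate block would fall outside the reference frame
            candidate = ref[y:y + block, x:x + block].astype(np.int32)
            sad = int(np.abs(target - candidate).sum())
            if sad < best_sad:
                best_sad, best_vector = sad, (dy, dx)
    return best_vector, best_sad
```

An encoder using such a search transmits only the motion vector and the (transformed and quantized) residual between the target block and the chosen candidate; a different encoder may use a faster diamond or hexagonal search for the same format and still produce a compliant bitstream.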

History
The concept of analog video compression dates back to 1929, when R.D. Kell in Britain proposed the
concept of transmitting only the portions of the scene that changed from frame-to-frame. The concept of
digital video compression dates back to 1952, when Bell Labs researchers B.M. Oliver and C.W. Harrison
proposed the use of differential pulse-code modulation (DPCM) in video coding. The concept of inter-frame
motion compensation dates back to 1959, when NHK researchers Y. Taki, M. Hatori and S. Tanaka proposed
predictive inter-frame video coding in the temporal dimension.[10] In 1967, University of London
researchers A.H. Robinson and C. Cherry proposed run-length encoding (RLE), a lossless compression
scheme, to reduce the transmission bandwidth of analog television signals.[11]

The earliest digital video coding algorithms either handled uncompressed video or used lossless compression, both of which were inefficient and impractical for digital video coding.[12][13] Digital video was
introduced in the 1970s,[12] initially using uncompressed pulse-code modulation (PCM) requiring high
bitrates around 45–200 Mbit/s for standard-definition (SD) video,[12][13] which was up to 2,000 times
greater than the telecommunication bandwidth (up to 100 kbit/s) available until the 1990s.[13] Similarly,
uncompressed high-definition (HD) 1080p video requires bitrates exceeding 1 Gbit/s, significantly greater
than the bandwidth available in the 2000s.[14]
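
These bitrate figures follow directly from frame geometry. As a rough check, assuming 8-bit samples with 4:2:2 chroma subsampling (an average of 16 bits per pixel, an illustrative choice), the raw bitrate is simply width x height x frame rate x bits per pixel:

```python
def raw_bitrate(width, height, fps, bits_per_pixel):
    """Uncompressed video bitrate in bits per second."""
    return width * height * fps * bits_per_pixel

# Assuming 8-bit 4:2:2 sampling, i.e. an average of 16 bits per pixel.
sd = raw_bitrate(720, 576, 25, 16)     # ~166 Mbit/s, within the 45-200 Mbit/s range above
hd = raw_bitrate(1920, 1080, 30, 16)   # ~995 Mbit/s, i.e. roughly 1 Gbit/s
print(f"SD: {sd / 1e6:.0f} Mbit/s, HD 1080p: {hd / 1e9:.2f} Gbit/s")
```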

Motion-compensated DCT
Practical video compression was made possible by the development of motion-compensated DCT (MC DCT)
coding,[13][12] also called block motion compensation (BMC)[10] or DCT motion compensation. This is a
hybrid coding algorithm,[10] which combines two key data compression techniques: discrete cosine
transform (DCT) coding[13][12] in the spatial dimension, and predictive motion compensation in the
temporal dimension.[10]

DCT coding is a lossy, block-based transform coding technique that was first proposed by Nasir
Ahmed, who initially intended it for image compression, while he was working at Kansas State University in
1972. It was then developed into a practical image compression algorithm by Ahmed with T. Natarajan and
K. R. Rao at the University of Texas in 1973, and was published in 1974.[15][16][17]
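
What DCT coding contributes can be seen by applying the 2-D DCT-II directly to an 8x8 block, the block size used by most classic image and video coders: for smooth image content the transform concentrates most of the energy in a few low-frequency coefficients, which is what makes coarse quantization of the remaining coefficients cheap in perceptual terms. The following is a plain NumPy sketch of the direct (not fast) transform.

```python
import numpy as np

def dct2_8x8(block):
    """Orthonormal 2-D DCT-II of an 8x8 block, in direct matrix form (Y = C X C^T)."""
    n = 8
    k = np.arange(n)
    # DCT-II basis matrix: C[k, i] = sqrt(2/n) * cos(pi * (2i + 1) * k / (2n)),
    # with the first row rescaled to sqrt(1/n) for orthonormality.
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)
    return c @ block @ c.T

# A smooth block: the energy ends up in the top-left (low-frequency) corner.
x, _ = np.meshgrid(np.arange(8), np.arange(8))
block = 128 + 20 * np.cos(np.pi * x / 8)
print(np.round(dct2_8x8(block.astype(float)), 1))
```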

The other key development was motion-compensated hybrid coding.[10] In 1974, Ali Habibi at the
University of Southern California introduced hybrid coding,[18][19][20] which combines predictive coding
with transform coding.[10][21] He examined several transform coding techniques, including the DCT,
Hadamard transform, Fourier transform, slant transform, and Karhunen–Loève transform.[18] However, his
algorithm was initially limited to intra-frame coding in the spatial dimension. In 1975, John A. Roese and
Guner S. Robinson extended Habibi's hybrid coding algorithm to the temporal dimension, using transform
coding in the spatial dimension and predictive coding in the temporal dimension, developing inter-frame
motion-compensated hybrid coding.[10][22] For the spatial transform coding, they experimented with
different transforms, including the DCT and the fast Fourier transform (FFT), developing inter-frame
hybrid coders for them, and found that the DCT is the most efficient due to its reduced complexity, capable of compressing image data down to 0.25 bit per pixel for a videotelephone scene with image quality comparable to that of a typical intra-frame coder requiring 2 bits per pixel.[23][22]

The DCT was applied to video encoding by Wen-Hsiung Chen,[24] who developed a fast DCT algorithm with
C.H. Smith and S.C. Fralick in 1977,[25][26] and founded Compression Labs to commercialize DCT
technology.[24] In 1979, Anil K. Jain and Jaswant R. Jain further developed motion-compensated DCT video
compression.[27][10] This led to Chen developing a practical video compression algorithm, called motion-
compensated DCT or adaptive scene coding, in 1981.[10] Motion-compensated DCT later became the
standard coding technique for video compression from the late 1980s onwards.[12][28]

Video coding standards

The first digital video coding standard was H.120, developed by the CCITT (now ITU-T) in 1984.[29] H.120
was not usable in practice, as its performance was too poor.[29] H.120 used motion-compensated DPCM
coding,[10] a lossless compression algorithm that was inefficient for video coding.[12] During the late 1980s,
a number of companies began experimenting with discrete cosine transform (DCT) coding, a much more
efficient form of compression for video coding. The CCITT received 14 proposals for DCT-based video
compression formats, in contrast to a single proposal based on vector quantization (VQ) compression. The
H.261 standard was developed based on motion-compensated DCT compression.[12][28] H.261 was the first
practical video coding standard,[29] and was developed with patents licensed from a number of companies,
including Hitachi, PictureTel, NTT, BT, and Toshiba, among others.[30] Since H.261, motion-compensated
DCT compression has been adopted by all the major video coding standards (including the H.26x and
MPEG formats) that followed.[12][28]

MPEG-1, developed by the Moving Picture Experts Group (MPEG), followed in 1991 and was designed to
compress VHS-quality video.[29] It was succeeded in 1994 by MPEG-2/H.262,[29] which was developed with
patents licensed from a number of companies, primarily Sony, Thomson and Mitsubishi Electric.[31] MPEG-
2 became the standard video format for DVD and SD digital television.[29] Its motion-compensated DCT
algorithm was able to achieve a compression ratio of up to 100:1, enabling the development of digital media
technologies such as video-on-demand (VOD)[13] and high-definition television (HDTV).[32] In 1999, it was
followed by MPEG-4/H.263, which was a major leap forward for video compression technology.[29] It was
developed with patents licensed from a number of companies, primarily Mitsubishi, Hitachi and
Panasonic.[33]

The most widely used video coding format as of 2019 is H.264/MPEG-4 AVC.[34] It was developed in 2003,
with patents licensed from a number of organizations, primarily Panasonic, Godo Kaisha IP Bridge and LG
Electronics.[35] In contrast to the standard DCT used by its predecessors, AVC uses the integer DCT.[24][36]
H.264 is one of the video encoding standards for Blu-ray Discs; all Blu-ray Disc players must be able to
decode H.264. It is also widely used by streaming internet sources, such as videos from YouTube, Netflix,
Vimeo, and the iTunes Store, web software such as the Adobe Flash Player and Microsoft Silverlight, and
also various HDTV broadcasts over terrestrial (Advanced Television Systems Committee standards, ISDB-T,
DVB-T or DVB-T2), cable (DVB-C), and satellite (DVB-S2).

A main problem for many video coding formats has been patents, which can make a format expensive to use or expose users to the risk of a patent lawsuit due to submarine patents. The motivation behind many recently designed video coding formats such as Theora, VP8, and VP9 has been to create a (libre) video coding standard covered only by royalty-free patents.[37] Patent status has also been a major point of contention in the choice of which video formats the mainstream web browsers support inside the HTML5 video tag.

The current-generation video coding format is HEVC (H.265), introduced in 2013. While AVC uses the
integer DCT with 4x4 and 8x8 block sizes, HEVC uses integer DCT and DST transforms with varied block
sizes between 4x4 and 32x32.[38] HEVC is heavily patented, with the majority of patents belonging to
Samsung Electronics, GE, NTT and JVC Kenwood.[39] It is currently being challenged by AV1, a format that aims to be royalty-free. As of 2019, AVC is by far the most commonly used format for the recording, compression and distribution of video content, used by 91% of video developers, followed by HEVC, which is used by 43% of developers.[34]
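
The integer DCT used by AVC and HEVC replaces the floating-point transform with small integer matrices, so encoder and decoder agree on the result bit-exactly. As a sketch, the widely documented 4x4 core transform of H.264/AVC can be applied as Y = Cf X Cf^T; the per-coefficient scaling that completes the approximation of the DCT is normally folded into the quantization step and is omitted here.

```python
import numpy as np

# H.264/AVC 4x4 forward core transform matrix (an integer approximation of the DCT).
CF = np.array([[1,  1,  1,  1],
               [2,  1, -1, -2],
               [1, -1, -1,  1],
               [1, -2,  2, -1]], dtype=np.int64)

def forward_core_transform(residual_4x4):
    """Apply Y = Cf X Cf^T to a 4x4 integer residual block (scaling omitted)."""
    x = np.asarray(residual_4x4, dtype=np.int64)
    return CF @ x @ CF.T

# A constant residual block transforms to a single DC coefficient.
print(forward_core_transform(np.full((4, 4), 3)))
```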

List of video coding standards


Timeline of international video compression standards

Basic algorithm | Video coding standard | Year | Publisher(s) | Committee(s) | Licensor(s) | Market share (2019)[34] | Popular implementations
DPCM | H.120 | 1984 | CCITT | VCEG | N/A | N/A | Unknown
DCT | H.261 | 1988 | CCITT | VCEG | Hitachi, PictureTel, NTT, BT, Toshiba, etc.[30] | N/A | Videoconferencing, videotelephony
DCT | Motion JPEG (MJPEG) | 1992 | JPEG | JPEG | N/A | N/A | QuickTime
DCT | MPEG-1 Part 2 | 1993 | ISO, IEC | MPEG | Fujitsu, IBM, Matsushita, etc.[40] | N/A | Video-CD, Internet video
DCT | H.262 / MPEG-2 Part 2 (MPEG-2 Video) | 1995 | ISO, IEC, ITU-T | MPEG, VCEG | Sony, Thomson, Mitsubishi, etc.[31] | 29% | DVD Video, Blu-ray, DVB, ATSC, SVCD, SDTV
DCT | DV | 1995 | IEC | IEC | Sony, Panasonic | Unknown | Camcorders, digital cassettes
DCT | H.263 | 1996 | ITU-T | VCEG | Mitsubishi, Hitachi, Panasonic, etc.[33] | Unknown | Videoconferencing, videotelephony, H.320, Integrated Services Digital Network (ISDN),[41][42] mobile video (3GP), MPEG-4 Visual
DCT | MPEG-4 Part 2 (MPEG-4 Visual) | 1999 | ISO, IEC | MPEG | Mitsubishi, Hitachi, Panasonic, etc.[33] | Unknown | Internet video, DivX, Xvid
DWT | Motion JPEG 2000 (MJ2) | 2001 | JPEG[43] | JPEG[44] | N/A | Unknown | Digital cinema[45]
DCT | Advanced Video Coding (H.264 / MPEG-4 AVC) | 2003 | ISO, IEC, ITU-T | MPEG, VCEG | Panasonic, Godo Kaisha IP Bridge, LG, etc.[35] | 91% | Blu-ray, HD DVD, HDTV (DVB, ATSC), video streaming (YouTube, Netflix, Vimeo), iTunes Store, iPod Video, Apple TV, videoconferencing, Flash Player, Silverlight, VOD
DCT | Theora | 2004 | Xiph | Xiph | N/A | Unknown | Internet video, web browsers
DCT | VC-1 | 2006 | SMPTE | SMPTE | Microsoft, Panasonic, LG, Samsung, etc.[46] | Unknown | Blu-ray, Internet video
DCT | Apple ProRes | 2007 | Apple | Apple | Apple | Unknown | Video production, post-production
DCT | High Efficiency Video Coding (H.265 / MPEG-H HEVC) | 2013 | ISO, IEC, ITU-T | MPEG, VCEG | Samsung, GE, NTT, JVC Kenwood, etc.[39][47] | 43% | UHD Blu-ray, DVB, ATSC 3.0, UHD streaming, High Efficiency Image Format, macOS High Sierra, iOS 11
DCT | AV1 | 2018 | AOMedia | AOMedia | N/A | 7% | HTML5 video
DCT | Versatile Video Coding (VVC / H.266) | 2020 | JVET | JVET | Unknown | N/A | N/A

Lossless, lossy, and uncompressed video coding formats


Consumer video is generally compressed using lossy video codecs, since that results in significantly smaller
files than lossless compression. While there are video coding formats designed explicitly for either lossy or
lossless compression, some video coding formats such as Dirac and H.264 support both.

Uncompressed video formats, such as Clean HDMI, are a form of lossless video used in some circumstances, such as when sending video to a display over an HDMI connection. Some high-end cameras can also capture video directly in this format.

Intra-frame video coding formats


Interframe compression complicates editing of an encoded video sequence.[48] One subclass of relatively simple video coding formats is the intra-frame video formats, such as DV, in which each frame of the video stream is compressed independently without referring to other frames in the stream, and no attempt is made to take advantage of correlations between successive pictures over time for better compression. One example is Motion JPEG, which is simply a sequence of individually JPEG-compressed images. This approach is quick and simple, at the expense of the encoded video being much larger than with a video coding format supporting inter-frame coding.

Because interframe compression copies data from one frame to another, if the original frame is simply cut
out (or lost in transmission), the following frames cannot be reconstructed properly. Making 'cuts' in
intraframe-compressed video while video editing is almost as easy as editing uncompressed video: one finds
the beginning and ending of each frame, and simply copies bit-for-bit each frame that one wants to keep,
and discards the frames one doesn't want. Another difference between intraframe and interframe
compression is that, with intraframe systems, each frame uses a similar amount of data. In most interframe
systems, certain frames (such as "I frames" in MPEG-2) aren't allowed to copy data from other frames, so
they require much more data than other frames nearby.[49]
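
The editing constraint can be seen with a toy model of a frame sequence: in an intra-only stream every frame is a valid cut point, whereas in an interframe stream a cut (or a seek, or a transmission loss) effectively restarts decoding at the next I frame, because the P frames before it cannot be reconstructed. The frame-pattern strings below are purely illustrative.

```python
def decodable_from(pattern, cut):
    """Return the indices of frames that can still be decoded if playback
    (or an edit) starts at index `cut` in a stream described by a pattern such
    as 'IPPPIPPP', where 'I' frames are self-contained and 'P' frames
    reference the previously decoded frame.
    """
    have_reference = False
    decodable = []
    for i, kind in enumerate(pattern[cut:], start=cut):
        if kind == "I":
            have_reference = True
            decodable.append(i)
        elif kind == "P" and have_reference:
            decodable.append(i)
        # P frames seen before the first available I frame are lost.
    return decodable

print(decodable_from("IPPPIPPP", 2))  # [4, 5, 6, 7]: frames 2 and 3 cannot be rebuilt
print(decodable_from("IIIIIIII", 2))  # [2, 3, 4, 5, 6, 7]: intra-only, cut anywhere
```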

It is possible to build a computer-based video editor that spots the problems caused when I frames are edited out while other frames need them. This has allowed newer formats like HDV to be used for editing. However, this process demands a lot more computing power than editing intra-frame compressed video with the same picture quality. This kind of compression is also not very effective for audio formats.

Profiles and levels


A video coding format can define optional restrictions on encoded video, called profiles and levels. It is possible to have a decoder which supports decoding only a subset of profiles and levels of a given video format, for example to make the decoder program/hardware smaller, simpler, or faster.

A profile restricts which encoding techniques are allowed. For example, the H.264 format includes the
profiles baseline, main and high (and others). While P-slices (which can be predicted based on preceding
slices) are supported in all profiles, B-slices (which can be predicted based on both preceding and following
slices) are supported in the main and high profiles but not in baseline.[50]

A level is a restriction on parameters such as maximum resolution and data rates.[50]
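
As an illustration, an encoder or decoder can check a target picture size and frame rate against level limits, typically a maximum frame size in macroblocks and a maximum macroblock processing rate. The constants below approximate two entries of Table A-1 of the H.264 specification and should be treated as illustrative values rather than an authoritative copy of that table.

```python
# Illustrative H.264-style level limits:
# level -> (max macroblocks per frame, max macroblocks per second).
LEVEL_LIMITS = {
    "3.1": (3600, 108000),
    "4.0": (8192, 245760),
}

def fits_level(width, height, fps, level):
    """Check whether a picture size and frame rate fit within a given level."""
    max_frame_size, max_mb_rate = LEVEL_LIMITS[level]
    mbs_per_frame = ((width + 15) // 16) * ((height + 15) // 16)
    return mbs_per_frame <= max_frame_size and mbs_per_frame * fps <= max_mb_rate

print(fits_level(1280, 720, 30, "3.1"))   # True: 720p30 fits level 3.1
print(fits_level(1920, 1080, 30, "3.1"))  # False: 1080p30 needs a higher level
print(fits_level(1920, 1080, 30, "4.0"))  # True
```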

See also
Comparison of container formats
Data compression#Video
List of video compression formats
Video file format

References and notes


1. The term "video coding" can be seen in e.g. the names Advanced Video Coding, High Efficiency Video
Coding, and Video Coding Experts Group
2. Thomas Wiegand; Gary J. Sullivan; Gisle Bjontegaard & Ajay Luthra (July 2003). "Overview of the
H.264 / AVC Video Coding Standard" (http://654lab.webstarts.com/uploads/csvt_overview.pdf) (PDF).
IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY.
3. "SERIES H: AUDIOVISUAL AND MULTIMEDIA SYSTEMS : Infrastructure of audiovisual services –
Coding of moving video : Advanced video coding for generic audiovisual services" (http://www.itu.int/rec/
dologin_pub.asp?lang=e&id=T-REC-H.264-200305-S!!PDF-E&type=items). Itu.int. Retrieved 6 January
2015.
4. "Front Page" (http://aomedia.org/). Alliance for Open Media. Retrieved 2016-05-23.
5. Adrian Grange; Peter de Rivaz & Jonathan Hunt. "VP9 Bitstream & Decoding Process Specification" (htt
ps://storage.googleapis.com/downloads.webmproject.org/docs/vp9/vp9-bitstream-specification-v0.6-201
60331-draft.pdf) (PDF).
6. "Audio/Video" (https://www.chromium.org/audio-video). The Chromium Projects. Retrieved 2016-05-23.
7. "Media formats supported by the HTML audio and video elements" (https://developer.mozilla.org/en-US/
docs/Web/HTML/Supported_media_formats). Mozilla. Retrieved 2016-05-23.
8. Rowan Trollope (2013-10-30). "Open-Sourced H.264 Removes Barriers to WebRTC" (https://blogs.cisco
.com/collaboration/open-source-h-264-removes-barriers-webrtc). Cisco. Retrieved 2016-05-23.
9. "Chapter 3 : Modified A* Prune Algorithm for finding K-MCSP in video compression" (http://shodhganga.i
nflibnet.ac.in/bitstream/10603/8175/8/08_chapter%203.pdf) (PDF). Shodhganga.inflibnet.ac.in.
Retrieved 2015-01-06.
10. "History of Video Compression" (https://www.itu.int/wftp3/av-arch/jvt-site/2002_07_Klagenfurt/JVT-D068.
doc). ITU-T. Joint Video Team (JVT) of ISO/IEC MPEG & ITU-T VCEG (ISO/IEC JTC1/SC29/WG11 and
ITU-T SG16 Q.6). July 2002. pp. 11, 24–9, 33, 40–1, 53–6. Retrieved 3 November 2019.
11. Robinson, A. H.; Cherry, C. (1967). "Results of a prototype television bandwidth compression scheme".
Proceedings of the IEEE. IEEE. 55 (3): 356–364. doi:10.1109/PROC.1967.5493 (https://doi.org/10.1109
%2FPROC.1967.5493).
12. Ghanbari, Mohammed (2003). Standard Codecs: Image Compression to Advanced Video Coding (https:
//books.google.com/?id=7XuU8T3ooOAC&pg=PA1). Institution of Engineering and Technology. pp. 1–2.
ISBN 9780852967102.
13. Lea, William (1994). Video on demand: Research Paper 94/68 (https://researchbriefings.parliament.uk/R
esearchBriefing/Summary/RP94-68). 9 May 1994: House of Commons Library. Retrieved 20 September
2019.
14. Lee, Jack (2005). Scalable Continuous Media Streaming Systems: Architecture, Design, Analysis and
Implementation (https://books.google.com/?id=7fuvu52cyNEC&pg=PA25). John Wiley & Sons. p. 25.
ISBN 9780470857649.
15. Ahmed, Nasir (January 1991). "How I Came Up With the Discrete Cosine Transform" (https://www.scribd
.com/doc/52879771/DCT-History-How-I-Came-Up-with-the-Discrete-Cosine-Transform). Digital Signal
Processing. 1 (1): 4–5. doi:10.1016/1051-2004(91)90086-Z (https://doi.org/10.1016%2F1051-2004%289
1%2990086-Z).
16. Ahmed, Nasir; Natarajan, T.; Rao, K. R. (January 1974), "Discrete Cosine Transform", IEEE
Transactions on Computers, C-23 (1): 90–93, doi:10.1109/T-C.1974.223784 (https://doi.org/10.1109%2F
T-C.1974.223784)
17. Rao, K. R.; Yip, P. (1990), Discrete Cosine Transform: Algorithms, Advantages, Applications, Boston:
Academic Press, ISBN 978-0-12-580203-1
18. Habibi, Ali (1974). "Hybrid Coding of Pictorial Data". IEEE Transactions on Communications. 22 (5):
614–624. doi:10.1109/TCOM.1974.1092258 (https://doi.org/10.1109%2FTCOM.1974.1092258).
19. Chen, Z.; He, T.; Jin, X.; Wu, F. (2019). "Learning for Video Compression". IEEE Transactions on
Circuits and Systems for Video Technology. 30 (2): 566–576. arXiv:1804.09869 (https://arxiv.org/abs/18
04.09869). doi:10.1109/TCSVT.2019.2892608 (https://doi.org/10.1109%2FTCSVT.2019.2892608).
20. Pratt, William K. (1984). Advances in Electronics and Electron Physics: Supplement (https://books.googl
e.com/?id=OX00AAAAIAAJ). Academic Press. p. 158. ISBN 9780120145720. "A significant advance in
image coding methodology occurred with the introduction of the concept of hybrid transform/DPCM
coding (Habibi, 1974)."
21. Ohm, Jens-Rainer (2015). Multimedia Signal Coding and Transmission (https://books.google.com/?id=e
7xnBwAAQBAJ&pg=PA364). Springer. p. 364. ISBN 9783662466919.
22. Roese, John A.; Robinson, Guner S. (30 October 1975). "Combined Spatial And Temporal Coding Of
Digital Image Sequences". Efficient Transmission of Pictorial Information. International Society for
Optics and Photonics. 0066: 172–181. Bibcode:1975SPIE...66..172R (https://ui.adsabs.harvard.edu/abs
/1975SPIE...66..172R). doi:10.1117/12.965361 (https://doi.org/10.1117%2F12.965361).
23. Huang, T. S. (1981). Image Sequence Analysis (https://books.google.com/?id=bAirCAAAQBAJ&pg=PA2
9). Springer Science & Business Media. p. 29. ISBN 9783642870378.
24. Stanković, Radomir S.; Astola, Jaakko T. (2012). "Reminiscences of the Early Work in DCT: Interview
with K.R. Rao" (http://ticsp.cs.tut.fi/reports/ticsp-report-60-reprint-rao-corrected.pdf) (PDF). Reprints from
the Early Days of Information Sciences. 60. Retrieved 13 October 2019.
25. Chen, Wen-Hsiung; Smith, C. H.; Fralick, S. C. (September 1977). "A Fast Computational Algorithm for
the Discrete Cosine Transform". IEEE Transactions on Communications. 25 (9): 1004–1009.
doi:10.1109/TCOM.1977.1093941 (https://doi.org/10.1109%2FTCOM.1977.1093941).
26. "T.81 – Digital compression and coding of continuous-tone still images – Requirements and guidelines" (
https://www.w3.org/Graphics/JPEG/itu-t81.pdf) (PDF). CCITT. September 1992. Retrieved 12 July 2019.
27. Cianci, Philip J. (2014). High Definition Television: The Creation, Development and Implementation of
HDTV Technology (https://books.google.com/?id=0mbsfr38GTgC&pg=PA63). McFarland. p. 63.
ISBN 9780786487974.
28. Li, Jian Ping (2006). Proceedings of the International Computer Conference 2006 on Wavelet Active
Media Technology and Information Processing: Chongqing, China, 29-31 August 2006 (https://books.go
ogle.com/?id=FZiK3zXdK7sC&pg=PA847). World Scientific. p. 847. ISBN 9789812709998.
29. "The History of Video File Formats Infographic" (http://www.real.com/resources/digital-video-file-formats/
). RealNetworks. 22 April 2012. Retrieved 5 August 2019.
30. "ITU-T Recommendation declared patent(s)" (https://www.itu.int/ITU-T/recommendations/related_ps.asp
x?id_prod=1088). ITU. Retrieved 12 July 2019.
31. "MPEG-2 Patent List" (https://www.mpegla.com/wp-content/uploads/m2-att1.pdf) (PDF). MPEG LA.
Retrieved 7 July 2019.
32. Shishikui, Yoshiaki; Nakanishi, Hiroshi; Imaizumi, Hiroyuki (October 26–28, 1993). "An HDTV Coding
Scheme using Adaptive-Dimension DCT" (https://books.google.com/books?id=j9XSBQAAQBAJ&pg=PA
611). Signal Processing of HDTV: Proceedings of the International Workshop on HDTV '93, Ottawa,
Canada. Elsevier: 611–618. doi:10.1016/B978-0-444-81844-7.50072-3 (https://doi.org/10.1016%2FB97
8-0-444-81844-7.50072-3). ISBN 9781483298511.
33. "MPEG-4 Visual - Patent List" (https://www.mpegla.com/wp-content/uploads/m4v-att1.pdf) (PDF).
MPEG LA. Retrieved 6 July 2019.
34. "Video Developer Report 2019" (https://cdn2.hubspot.net/hubfs/3411032/Bitmovin%20Magazine/Video%
20Developer%20Report%202019/bitmovin-video-developer-report-2019.pdf) (PDF). Bitmovin. 2019.
Retrieved 5 November 2019.
35. "AVC/H.264 – Patent List" (https://www.mpegla.com/wp-content/uploads/avc-att1.pdf) (PDF). MPEG LA.
Retrieved 6 July 2019.
36. Wang, Hanli; Kwong, S.; Kok, C. (2006). "Efficient prediction algorithm of integer DCT coefficients for
H.264/AVC optimization". IEEE Transactions on Circuits and Systems for Video Technology. 16 (4):
547–552. doi:10.1109/TCSVT.2006.871390 (https://doi.org/10.1109%2FTCSVT.2006.871390).
37. https://blogs.cisco.com/collaboration/world-meet-thor-a-project-to-hammer-out-a-royalty-free-video-codec
38. Thomson, Gavin; Shah, Athar (2017). "Introducing HEIF and HEVC" (https://devstreaming-cdn.apple.co
m/videos/wwdc/2017/503i6plfvfi7o3222/503/503_introducing_heif_and_hevc.pdf) (PDF). Apple Inc.
Retrieved 5 August 2019.
39. "HEVC Patent List" (https://www.mpegla.com/wp-content/uploads/hevc-att1.pdf) (PDF). MPEG LA.
Retrieved 6 July 2019.
40. "ISO Standards and Patents" (https://www.iso.org/iso-standards-and-patents.html). ISO. Retrieved
10 July 2019.
41. Davis, Andrew (13 June 1997). "The H.320 Recommendation Overview" (https://www.eetimes.com/docu
ment.asp?doc_id=1275886). EE Times. Retrieved 7 November 2019.
42. IEEE WESCANEX 97: communications, power, and computing : conference proceedings (https://books.
google.com/?id=8vhEAQAAIAAJ). University of Manitoba, Winnipeg, Manitoba, Canada: Institute of
Electrical and Electronics Engineers. May 22–23, 1997. p. 30. ISBN 9780780341470. "H.263 is similar
to, but more complex than H.261. It is currently the most widely used international video compression
standard for video telephony on ISDN (Integrated Services Digital Network) telephone lines."
43. "Motion JPEG 2000 Part 3" (https://www.webcitation.org/6ArTdULNs?url=http://www.jpeg.org/jpeg2000/j
2kpart3.html). Joint Photographic Experts Group, JPEG, and Joint Bi-level Image experts Group, JBIG.
Archived from the original (http://www.jpeg.org/jpeg2000/j2kpart3.html) on 22 September 2012.
Retrieved 21 June 2014.
44. Taubman, David; Marcellin, Michael (2012). JPEG2000 Image Compression Fundamentals, Standards
and Practice: Image Compression Fundamentals, Standards and Practice (https://books.google.com/?id
=y7HeBwAAQBAJ&pg=PA402). Springer Science & Business Media. ISBN 9781461507994.
45. Swartz, Charles S. (2005). Understanding Digital Cinema: A Professional Handbook (https://books.googl
e.com/?id=tYw3ehoBnjkC&pg=PA147). Taylor & Francis. p. 147. ISBN 9780240806174.
46. "VC-1 Patent List" (https://www.mpegla.com/wp-content/uploads/vc-1-att1.pdf) (PDF). MPEG LA.
Retrieved 11 July 2019.
47. "HEVC Advance Patent List" (https://www.hevcadvance.com/licensors/). HEVC Advance. Retrieved
6 July 2019.
48. Bhojani, D.R. "4.1 Video Compression" (http://shodh.inflibnet.ac.in/bitstream/123456789/821/5/05_hypot
hesis.pdf) (PDF). Hypothesis. Retrieved 6 March 2013.
49. Jaiswal, R.C. (2009). Audio-Video Engineering. Pune, Maharashtra: Nirali Prakashan. p. 3.55.
ISBN 9788190639675.
50. Jan Ozer. "Encoding options for H.264 video" (http://www.adobe.com/devnet/adobe-media-server/article
s/h264_encoding.html). Adobe.com. Retrieved 6 January 2015.

Retrieved from "https://en.wikipedia.org/w/index.php?title=Video_coding_format&oldid=1024220760"

This page was last edited on 20 May 2021, at 20:44 (UTC).

