
CN101626242A - Improved Huffman decoding method and device - Google Patents


Info

Publication number: CN101626242A
Application number: CN200810116546A
Authority: CN (China)
Prior art keywords: stage, level, huffman, search unit, value
Legal status: granted; active
Other versions: CN101626242B
Other languages: Chinese (zh)
Inventor: 张盈华
Original assignee: Digital Technology Beijing Co., Ltd.
Current assignee: Guangdong Guangsheng Research and Development Institute Co., Ltd.
Priority application: CN200810116546.8A, filed by Digital Technology Beijing Co., Ltd.


Landscapes

  • Compression, Expansion, Code Conversion, And Decoders (AREA)

Abstract

The invention discloses a Huffman decoding method comprising the following steps: a first-stage search that, based on the data stream to be decoded, obtains a first-stage search unit of a two-stage Huffman codebook, the search unit having three components; a first-stage judgment that determines whether the first-stage search unit is a leaf node or a root node; if the first-stage search unit is judged to be a leaf node, outputting two of its components as the decoded data and as the number of bits of the Huffman codeword, respectively; otherwise, performing a second-stage search and obtaining the decoded data and the number of bits of the Huffman codeword from the second-stage search unit found by that search. The invention also provides a Huffman decoding device corresponding to the method. With the method and the device, Huffman decoding, in particular DRA Huffman decoding, can be optimized remarkably.

Description

Improved Huffman decoding method and device
Technical Field
[001] The present invention relates to an improved huffman decoding method and apparatus, and more particularly, to an improved audio huffman decoding method and apparatus for a DRA audio codec system.
Background
[002] As society has entered the information age, multimedia technology has developed rapidly, and large amounts of audio and video information are in wide use. For example, in fields such as digital television, IPTV, and DVD, audio and video content is delivered to users by transmission and playback and/or storage and playback. Audio and video data, however, are very large, which makes transmission and storage inconvenient. The original audio and video therefore need to be compression-encoded for transmission and/or storage and restored by decoding during playback.
[003] To compression-encode audio and video data, a series of audio and video compression algorithms have appeared in the prior art, the most common being the MPEG family of audio compression algorithms (e.g., MP3, MPEG-2 AAC, MPEG-4 AAC). In the MPEG audio algorithms, in addition to a first compression of the original audio signal by means common in the art, such as time-frequency transforms and psychoacoustic models, a second compression such as Huffman entropy coding is applied to the first-stage output to further reduce the data volume by exploiting the statistical redundancy still present in the signal.
[004] Besides the MPEG family, other prior-art audio compression algorithms also use Huffman coding, such as the DRA audio codec technology discussed herein, which was developed domestically in China. As shown in Figs. 1A and 1B, DRA audio codec technology, a Chinese electronics-industry standard for audio coding, has become well known in the industry in recent years. For further details on DRA audio codec technology, reference may be made to the industry standard with serial number SJ/T11368-2006, published by the Ministry of Information Industry on January 4, 2007, the entire contents of which are incorporated herein by reference. For convenience of description, that standard is hereinafter referred to simply as the "DRA standard", and the corresponding DRA audio codec technology as "DRA technology".
[005] Although Huffman coding produces good results and greatly improves coding efficiency (the term "coding efficiency" is defined herein as the ratio of the amount of original audio signal data to the amount of coded audio signal data), it is not without drawbacks. For example, when the Huffman codewords are long and the total number of codewords is large, the conventional single-stage Huffman decoding method used to invert the encoded audio signal suffers from a long average codebook search time, large differences in decoding time between codewords, and a large storage requirement. For the DRA Huffman decoding algorithm in particular, the time and memory consumed by Huffman decoding at the DRA decoding end account for a significant portion of the total decoding time and total memory requirement. Providing a fast and efficient Huffman decoding method is therefore key to optimizing the decoding efficiency of the DRA decoder.
[006] In order to solve the above problems, some improved techniques for huffman decoding methods at the audio decoding end have been disclosed in the prior art.
[007] For example, a paper by Li et al., "Improvement of the Huffman decoding algorithm in an audio system", published in issue 2 (2005) of Electronic Measurement Technology (hereinafter referred to as document 1), describes an improved Huffman decoding method that generates a new codebook of ascending 19-bit codewords by zero-padding the low bits of the codewords of an MPEG-1 codebook, and speeds up Huffman decoding by grouping all codewords according to the first 4 bits of the 19-bit codewords (the value of those 4 bits serving as the group number). The method of document 1, however, has at least the following drawbacks: the codewords may require zero padding, which increases the storage requirement; and in its two-step search, the first step is fixed at the first 4 bits, which is not necessarily optimal once both decoding speed and storage requirements are taken into account.
[008] As another example, a paper by Wang Yi et al., "A fast Huffman decoding algorithm for MPEG-2 AAC", published in issue 2 (2005) of Microcomputer & Its Applications (hereinafter referred to as document 2), describes an improved Huffman decoding method that optimizes decoding efficiency by: zero-padding the low bits of the codewords of the AAC codebook to generate a new codebook of ascending 16-bit codewords; recording the first codeword of each code length to form a positioning table; and determining the original Huffman codeword number from the base address + offset provided by the positioning table. The method of document 2, however, has at least the following drawbacks: the codewords may require zero padding, which increases the storage requirement; and the positioning table must be traversed on every lookup to find the position corresponding to the binary code read in, which increases the amount of computation.
[009] As another example, a paper published in Computer Engineering (vol. 30), December 2004, entitled "Huffman algorithm selection and optimization for an AAC decoder" (hereinafter referred to as document 3), discusses improvements of Huffman decoding methods based on stepwise table lookup and on binary-tree search under the ARM platform. The methods of document 3, however, have at least the following drawbacks: the improvement to the stepwise table-lookup method is not fine-grained enough and may not achieve the optimum trade-off between codebook size and search depth; and the improvement to the binary-tree search method relies on characteristics of the ARM architecture and is therefore not general.
[010] Furthermore, the prior art discloses no improved fast and efficient Huffman decoding methods for DRA systems.
Disclosure of Invention
[011] In order to solve the above problems and other problems, the present invention provides the following technical solutions.
[012] The invention discloses a Huffman decoding method comprising the following steps: a first-stage search that, based on the data stream to be decoded, obtains a first-stage search unit of a two-stage Huffman codebook, the search unit having three components; a first-stage judgment that determines whether the first-stage search unit is a leaf node or a root node; if the first-stage search unit is judged to be a leaf node, outputting two of its components as the decoded data and as the number of bits of the Huffman codeword, respectively; otherwise, performing a second-stage search and obtaining the decoded data and the number of bits of the Huffman codeword from the second-stage search unit found by that search. In addition, the invention provides a corresponding Huffman decoding device.
[013] Based on the technical scheme, the invention realizes fast and efficient audio Huffman decoding.
[014] Aiming at the characteristics of the DRA codebook, the invention also provides a further DRA Huffman decoding method, which determines the optimal first-stage read-in bit number x aiming at the DRA codebook in a comprehensive cost function or three-stage judgment mode.
[015] Based on the further technical scheme, the invention realizes fast and efficient audio Huffman decoding aiming at the DRA technology.
Drawings
[016] The subject matter of the invention will be explained in more detail hereinafter with reference to preferred exemplary embodiments illustrated by the appended drawings, in which like reference numerals represent the same or equivalent elements. In the drawings:
[017] FIGS. 1A and 1B are block diagrams illustrating DRA audio codecs and decoders, respectively;
[018] fig. 2 is a flow chart illustrating an improved huffman decoding method according to a first embodiment of the invention;
[019] fig. 3 is a block diagram showing an improved huffman decoding apparatus according to a first embodiment of the present invention;
[020] fig. 4 is a flow chart illustrating an improved DRA huffman decoding method according to a second embodiment of the invention;
[021] fig. 5 is a block diagram illustrating an improved DRA huffman decoding device according to a second embodiment of the invention;
[022] FIG. 6 is a flow chart detailing the x-value calculation steps: x calculation step 3101' corresponding to step 3101A; and
[023] FIG. 7 is a block diagram detailing the x-value calculation module: an initial bit calculation block 4101' corresponding to block 4101A.
Detailed Description
[024] Preferred embodiments of the invention will be described hereinafter by way of the accompanying drawings. In the following description, functions or constructions that have become known in the art are not described in detail since they would obscure the description of the invention with unnecessary detail.
[025] A typical DRA audio encoder 10 is shown in Fig. 1A; it may be implemented in hardware, software, and/or firmware. In short, the DRA standard processes the source sound (e.g., input PCM samples) through a number of technical modules so as to compress it with "coding defects that are hardly audible". These modules include, but are not limited to: transient analysis module 20, multi-resolution filterbank module 22, linear scalar quantization module 30, quantization index coding module 32, codebook selection module 34, human auditory model module 40, global bit allocation module 42, and multiplexing module 50. According to the relevant provisions of the DRA standard, these modules are mandatory; that is, a standard-compliant DRA output code stream (a DRA standard code stream) is one that has been processed by these modules. Functionally, the modules can be divided into four groups: a multi-resolution analysis group (transient analysis module 20, multi-resolution filterbank module 22), a quantization group (linear scalar quantization module 30, quantization index coding module 32, codebook selection module 34), a psychoacoustic model group (human auditory model module 40, global bit allocation module 42), and a MUX group (multiplexing module 50). The present invention is primarily concerned with the quantization group.
[026] Fig. 1B shows a DRA audio decoder 110, which decodes a DRA encoded code stream to obtain the DRA decoded signal (i.e., the PCM sample output). The flow relevant to the DRA Huffman decoding method is as follows: first, the demultiplexing module 150 receives the DRA code stream and extracts its control information and data information; the codebook selection information is then passed to the codebook selection module 134, which controls the quantization index module 132 and the quantization unit number module; the quantization index module 132 decodes the data from the demultiplexing module 150 under the control information from the codebook selection module 134 to obtain the quantization indices; finally, the inverse quantization module 130 inversely quantizes the decoded quantization indices using the control information provided by the quantization unit number module.
The first embodiment: optimized Huffman decoding techniques
[027] The improved huffman decoding method 1000 according to the first embodiment of the present invention will be explained in detail below with reference to fig. 2, and may be used not only in DRA audio codec systems, but also in other audio and/or video codec systems, and is therefore referred to as "general improved huffman decoding method" in the following.
[028] For ease of discussion, one Huffman codebook to be decoded is given by way of example (shown in Table 1); it could be used in any known audio or video codec system. The codebook is arranged in 2 levels, each node consisting of three variables; the general form of a first-level node is written Structure1(X, Determin1, Determin2), and that of a second-level node Structure2(X, Determin1, Determin2). For example, for the first-level node numbered 2 (i.e., node (1, 3, 0)), X = 1, Determin1 = 3, and Determin2 = 0. As another example, for the 3rd second-level node of the row numbered 4 (i.e., node (5, 23, 4)), X = 5, Determin1 = 23, and Determin2 = 4.
Table 1 huffman codebook example (parallel)
Numbering First level node Second level node
0 (2,3,0)
1 (2,3,0)
2 (1,3,0)
3 (1,3,0)
4 (16,0,12) (1,10,1)(0,15,2)(5,23,4)(13,22,5)(9,25,5)(12,26,5)(15,31,5)(29,24,6)(16,28,6)(28,29,6)(35,27,7)(34,30,7)
5 (0,4,0)
6 (28,0,6) (3,12,2)(1,13,2)(0,14,2)(4,18,3)(10,20,4)(11,21,4)
7 (4,4,0)
8 (9,4,0)
9 (8,4,0)
10 (7,4,0)
11 (34,0,4) (1,11,1)(0,16,2)(2,17,3)(3,19,3)
12 (3,3,0)
13 (3,3,0)
14 (5,4,0)
15 (6,4,0)
[029] All nodes of Table 1 are divided into two parallel groups (i.e., two columns), so the Huffman codebook shown in Table 1 may be called a parallel Huffman codebook. There is also a serial arrangement, in which all nodes are placed sequentially in a single column: the first-level nodes come first, and the second-level nodes follow in order. The serial Huffman codebook corresponding to the parallel codebook of Table 1 is shown in Table 2.
Table 2 huffman code book example (serial)
Numbering Node
0 (2,3,0)
1 (2,3,0)
2 (1,3,0)
3 (1,3,0)
4 (16,0,12)
5 (0,4,0)
6 (28,0,6)
7 (4,4,0)
8 (9,4,0)
9 (8,4,0)
10 (7,4,0)
11 (34,0,4)
12 (3,3,0)
13 (3,3,0)
14 (5,4,0)
15 (6,4,0)
16 (1,10,1)
17 (0,15,2)
18 (5,23,4)
19 (13,22,5)
20 (9,25,5)
21 (12,26,5)
22 (15,31,5)
23 (29,24,6)
24 (16,28,6)
25 (28,29,6)
26 (35,27,7)
27 (34,30,7)
28 (3,12,2)
29 (1,13,2)
30 (0,14,2)
31 (4,18,3)
32 (10,20,4)
33 (11,21,4)
34 (1,11,1)
35 (0,16,2)
36 (2,17,3)
37 (3,19,3)
[030] The arrangement of table 2 is briefly described with reference to table 1: the nodes numbered 0-15 in table 2 correspond one-to-one to the first level nodes numbered 0-15 in table 1; the nodes numbered 16-27 in table 2 correspond one-to-one to the 12 second level nodes numbered 4 in table 1; nodes numbered 28-33 in table 2 correspond one-to-one to 6 second level nodes numbered 6 in table 1; nodes numbered 34-37 in table 2 correspond one-to-one to the 4 second level nodes numbered 11 in table 1.
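The parallel-to-serial rearrangement described above can be sketched in Python. The node data are taken from Table 1; the structure and the function name `serialize` are illustrative, not part of the patent:

```python
# Parallel codebook (Table 1): each row = (first_level_node, [second_level_nodes]).
# A first-level leaf is (symbol, bit_used1, 0); a root is
# (jump_address, 0, num_of_subentries); second-level nodes are
# (codeword, symbol, bit_used2).
PARALLEL = [
    ((2, 3, 0), []), ((2, 3, 0), []), ((1, 3, 0), []), ((1, 3, 0), []),
    ((16, 0, 12), [(1, 10, 1), (0, 15, 2), (5, 23, 4), (13, 22, 5),
                   (9, 25, 5), (12, 26, 5), (15, 31, 5), (29, 24, 6),
                   (16, 28, 6), (28, 29, 6), (35, 27, 7), (34, 30, 7)]),
    ((0, 4, 0), []),
    ((28, 0, 6), [(3, 12, 2), (1, 13, 2), (0, 14, 2), (4, 18, 3),
                  (10, 20, 4), (11, 21, 4)]),
    ((4, 4, 0), []), ((9, 4, 0), []), ((8, 4, 0), []), ((7, 4, 0), []),
    ((34, 0, 4), [(1, 11, 1), (0, 16, 2), (2, 17, 3), (3, 19, 3)]),
    ((3, 3, 0), []), ((3, 3, 0), []), ((5, 4, 0), []), ((6, 4, 0), []),
]

def serialize(parallel):
    """Flatten a parallel two-level codebook into the serial layout of Table 2:
    all first-level nodes first, then each root's second-level nodes in row order."""
    serial = [first for first, _ in parallel]
    for _, seconds in parallel:
        serial.extend(seconds)
    return serial
```

With this layout, the jump_address stored in each first-level root (e.g., 28 in node (28, 0, 6)) is exactly the serial index of that root's first second-level node, which is what makes the second-level search a simple linear scan.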
[031] The general improved Huffman decoding method 1000 will now be described in detail in conjunction with Tables 1-2 and Fig. 2. As shown in Fig. 2, the method 1000 begins at step 1100; then, in step 1101, x bits of data are read from the data stream to be decoded (x ∈ N, 4 ≤ x ≤ 8; x is fixed at 4 in the first embodiment). In step 1102, this x-bit binary value is converted to a decimal number, which serves as the linear index into the first-level codebook of Table 1. Next, in step 1103, Table 1 is looked up with the linear index to obtain a search unit of the first-level codebook. This completes the first-level search, and the first judgment begins.
[032] Then, in step 1104, it is determined whether Determin2 of this search unit is zero. If Determin2 = 0, Structure1 is a leaf node and takes the concrete form (symbol, bit_used1, 0); the decoding process goes to step 1110, outputs symbol as the decoded data and bit_used1 as the number of bits of the Huffman codeword, and then ends at step 1109. If Determin2 is not 0, Structure1 is a root node and takes the concrete form (jump_address, Determin1, num_of_subentries); the decoding process goes to step 1105, which calculates the starting position orig (i.e., the index into Table 2) of the second-level Huffman search. Specifically, from the three variables of Structure1 when Determin2 is not 0, the starting position is orig = jump_address, and looking up Table 2 at orig yields the initial second-level Huffman node Structure2 in step 1105; the third variable num_of_subentries of Structure1 gives the total number of corresponding second-level Huffman nodes, also called the maximum search depth (max_depth). Finally, still in step 1105, the loop variable i is initialized to orig. The first judgment is now complete, and the second-level search begins (step 1106).
[033] In the second-level search, every second-level Huffman node Structure2 is a leaf node and takes the concrete form Structure2(codeword, symbol, bit_used2). In step 1106, bit_used2(i) bits of binary data are read in and converted into the codeword to be compared, C_cw, where bit_used2(i) denotes the bit_used2 value of the node numbered i in Table 2. Next, in step 1107, C_cw is compared for equality with codeword(i), the codeword value of the node numbered i in Table 2. If C_cw = codeword(i), the process goes to step 1108; otherwise i is incremented by 1 and steps 1106-1107 are repeated. In practice, step 1107 is repeated no more than max_depth times. In step 1108, the symbol of the i-th node is output as the decoded data, and bit_used2 + x is output as the number of bits of the Huffman codeword. The decoding process then ends at step 1109.
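The two-stage lookup of steps 1101-1108 can be sketched as follows. This is a minimal Python sketch over the serial codebook of Table 2, with the bit stream supplied as a string of '0'/'1' characters; the function and variable names are illustrative, not from the patent:

```python
# Serial two-level codebook (Table 2): indices 0-15 are first-level nodes,
# indices 16-37 second-level nodes, in the order described in paragraph [030].
SERIAL = [
    (2, 3, 0), (2, 3, 0), (1, 3, 0), (1, 3, 0), (16, 0, 12), (0, 4, 0),
    (28, 0, 6), (4, 4, 0), (9, 4, 0), (8, 4, 0), (7, 4, 0), (34, 0, 4),
    (3, 3, 0), (3, 3, 0), (5, 4, 0), (6, 4, 0),
    # children of root 4 (indices 16-27): (codeword, symbol, bit_used2)
    (1, 10, 1), (0, 15, 2), (5, 23, 4), (13, 22, 5), (9, 25, 5), (12, 26, 5),
    (15, 31, 5), (29, 24, 6), (16, 28, 6), (28, 29, 6), (35, 27, 7), (34, 30, 7),
    # children of root 6 (indices 28-33)
    (3, 12, 2), (1, 13, 2), (0, 14, 2), (4, 18, 3), (10, 20, 4), (11, 21, 4),
    # children of root 11 (indices 34-37)
    (1, 11, 1), (0, 16, 2), (2, 17, 3), (3, 19, 3),
]

def decode_one(bits, pos=0, x=4):
    """Decode one symbol starting at bit offset pos.

    Returns (symbol, nbits), where nbits is the number of bits of the
    Huffman codeword (bit_used1 for a first-level leaf, x + bit_used2
    for a second-level match)."""
    index = int(bits[pos:pos + x], 2)        # steps 1101-1102: read x bits
    a, b, c = SERIAL[index]                  # step 1103: first-level lookup
    if c == 0:                               # leaf: (symbol, bit_used1, 0)
        return a, b                          # step 1110
    i = a                                    # root: orig = jump_address
    for _ in range(c):                       # at most max_depth comparisons
        codeword, symbol, bit_used2 = SERIAL[i]
        c_cw = int(bits[pos + x:pos + x + bit_used2], 2)  # step 1106
        if c_cw == codeword:                 # step 1107
            return symbol, x + bit_used2     # step 1108
        i += 1
    raise ValueError("no matching second-level codeword")  # error case of [042]
```

On the worked examples of paragraphs [035]-[036], decode_one("0001") yields (2, 3) and decode_one("0110100") yields (18, 7).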
[034] In the following, the case where the first-level Huffman node is a leaf node and the case where it is a root node are each illustrated with reference to Tables 1-2.
[035] For the case where the first-level Huffman node is a leaf node: suppose, for example, that the 4 bits read in step 1101 are 0001₂ (in this context, XYZ₂ denotes data XYZ in binary representation). The first-level codebook index is then 1, and looking up Table 1 in step 1103 yields the node (2, 3, 0). In step 1104 it is determined that Determin2 = 0, so node (2, 3, 0) is a leaf node; in step 1110, 2 is output as the decoded data and 3 as the number of bits of the Huffman codeword, and the decoding process then ends at step 1109.
[036] For the case where the first-level Huffman node is a root node: suppose the 4 bits read in step 1101 are 0110₂. The first-level codebook index is then 6, and looking up Table 1 in step 1103 yields the node (28, 0, 6). In step 1104 it is determined that Determin2 is not 0, so node (28, 0, 6) is a root node. In step 1105 it is then calculated that: the second-level search begins at node number 28 of Table 2 (orig = 28); the total number of second-level nodes corresponding to node (28, 0, 6) is 6 (i.e., the nodes numbered 28-33 in Table 2); and the loop variable is initialized to 28. Next, bit_used2(28) = 2 bits of binary data, 10₂, are read in and converted into the codeword to be compared, C_cw = 2. In step 1107 it is found that C_cw = 2 ≠ codeword(28) = 3, so i is incremented by 1 (i.e., i = 29) and steps 1106-1107 are repeated, until on the 4th repetition bit_used2(31) = 3, the 3 bits read in are 100₂, the corresponding C_cw is 4, and codeword(31) = 4 = C_cw, indicating that a matching node (the node numbered 31 in Table 2) has been found. The decoding process then proceeds to step 1108, where symbol = 18 of the node numbered 31 in Table 2 is output as the decoded data, and bit_used2 + x = 3 + 4 = 7 is output as the number of bits of the Huffman codeword. The decoding process then ends at step 1109.
[037] The improved huffman decoding apparatus 2000 according to the first embodiment of the present invention will be described in detail below with reference to fig. 3, and may be used not only in DRA audio codec systems but also in other audio and/or video codec systems, and will be referred to as "general improved huffman decoding apparatus" hereinafter.
[038] The Huffman codebook to be decoded is still arranged in 2 levels; as described earlier, each node consists of three variables, the general form of a first-level node being Structure1(X, Determin1, Determin2) and that of a second-level node Structure2(X, Determin1, Determin2). The specific arrangement of the codebook is as described in Tables 1-2.
[039] The general improved Huffman decoding apparatus 2000 is described in detail below with reference to Tables 1-2 and Fig. 3. As shown in Fig. 3, the apparatus 2000 comprises: a buffer module 2101, which reads x bits of data from the code stream (x ∈ N, 4 ≤ x ≤ 8; x is again fixed at 4) and outputs them; a linear index conversion module 2102, which receives the x bits, converts them into a decimal number, and outputs it as the linear index into the first-level codebook of Table 1; and a search module 2103, which receives the linear index, looks up Table 1, and outputs the corresponding search unit of the first-level codebook. Together, modules 2101-2103 perform the first-level search.
[040] The general improved Huffman decoding apparatus 2000 further comprises a judging module 2104, which receives the search unit, judges whether its Determin2 is zero, and selects an output accordingly. The judging module 2104 shown in Fig. 3 is a one-input, two-output selection module; that is, it selects one of its two output paths according to the judgment result. Specifically, if Determin2 = 0, the judging module 2104 determines that the concrete form of Structure1 is (symbol, bit_used1, 0) and outputs symbol as the decoded data and bit_used1 as the number of bits of the Huffman codeword to the first result output module 2110, which then outputs the Huffman-decoded data. Otherwise, if Determin2 is not 0, the judging module 2104 determines that the concrete form of Structure1 is (jump_address, Determin1, num_of_subentries) and outputs jump_address and num_of_subentries to the second-level decoding start position calculation module 2105. Module 2105 computes the starting position of the second-level Huffman search as orig = jump_address and obtains the initial second-level Huffman node Structure2 by looking up Table 2 at that value; it takes num_of_subentries as the total number of corresponding second-level nodes, i.e., the maximum search depth (max_depth). Finally, module 2105 initializes the loop variable i to orig and outputs the three quantities Structure2, max_depth, and i. Modules 2104 and 2110, or 2104 and 2105, perform the first judgment and prepare the second-level search (in the case of the module 2105 branch).
[041] For the module 2105 branch, the apparatus 2000 further comprises a loop and codeword comparison module 2106, which receives Structure2, max_depth, and i; the specific data storage format of Structure2 is Structure2(codeword, symbol, bit_used2). Module 2106 reads in bit_used2(i) bits of binary data and converts them into the codeword to be compared, C_cw, then tests whether C_cw equals the codeword codeword(i) corresponding to i. If C_cw = codeword(i), module 2106 outputs the symbol of the initial second-level Huffman node Structure2 as the decoded data and its bit_used2 value plus x as the number of bits of the Huffman codeword to the second result output module 2108, which then outputs the Huffman-decoded data. Otherwise, if C_cw ≠ codeword(i), module 2106 increments i by 1, moves to the i-th node of Table 2, and repeats the sequence of reading bit_used2(i) bits, converting them into C_cw, and testing whether C_cw equals codeword(i), until C_cw = codeword(i). When the required node has been found by this loop search (i.e., a suitable i has been determined), module 2106 outputs the symbol of the second-level node numbered i in Table 2 as the decoded data and its bit_used2 value plus x as the number of bits of the Huffman codeword to the second result output module 2108, which then outputs the Huffman-decoded data.
[042] Preferably, an error correction module (not shown) may be added to the loop and codeword comparison module 2106; it monitors the number of loop iterations in module 2106 and reports an error when that number reaches max_depth without a match.
[043] Compared with the single-stage Huffman decoding methods and devices of the prior art, the Huffman decoding method and device according to the first embodiment of the invention allow codewords with high occurrence probability to be found faster, which improves decoding efficiency. Furthermore, among the second-level nodes of the Huffman codebook used by the method of the first embodiment, bit_used2 rises monotonically, while bit_used2 is inversely correlated with the occurrence probability of a codeword, so high-probability codewords are decoded more quickly.
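The speed-up claimed above can be illustrated numerically. Under the idealized assumption (ours, not the patent's) that a codeword of total length x + bit_used2 occurs with probability proportional to 2^-(x + bit_used2), the expected number of second-level loop iterations for the children of root node 6 in Table 1 is smaller when the nodes are stored in ascending bit_used2 order than in the reverse order:

```python
# Children of root node 6 in Table 1: (codeword, symbol, bit_used2),
# already stored in ascending bit_used2 order.
GROUP = [(3, 12, 2), (1, 13, 2), (0, 14, 2), (4, 18, 3), (10, 20, 4), (11, 21, 4)]
X = 4  # first-level bits; total codeword length is X + bit_used2

def avg_iterations(group, x=X):
    """Expected number of linear-scan iterations, conditional on landing in
    this group, assuming P(codeword) is proportional to 2**-(x + bit_used2)."""
    weights = [2.0 ** -(x + bit_used2) for _, _, bit_used2 in group]
    total = sum(weights)
    return sum(w / total * (k + 1) for k, w in enumerate(weights))

ascending = avg_iterations(GROUP)                   # about 2.69 iterations
descending = avg_iterations(list(reversed(GROUP)))  # about 4.31 iterations
```

Under this assumption, placing short (high-probability) codewords first cuts the average scan length for this group from about 4.31 to about 2.69 comparisons.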
[044] Those skilled in the art will understand that the Huffman codebook grading scheme of the first embodiment, whether for the general improved Huffman decoding method 1000 or the general improved Huffman decoding apparatus 2000, is preferably two-level, but is not limited thereto; an improved Huffman codebook of more than two levels may be constructed according to the codebook characteristics. Likewise, the number x of data bits first read from the encoded code stream is not limited to 4 and may be fixed at 5, 6, 7, or 8.
Second embodiment: improved DRA huffman decoding technique
[045] The following embodiments are directed to features of DRA technology, and further improve it on the basis of "general improved huffman decoding method", herein referred to as "improved DRA huffman decoding method".
[046] First, in conjunction with Figs. 1A-1B and according to the DRA standard, DRA Huffman decoding is mainly used to: decode transient segment lengths (see section 5.4.3 of the DRA standard), decode codebook application ranges (see Table 20 in section 5.5.2), decode codebook indices (see Table 21 in section 5.5.3), decode the quantization indices of subband samples based on the selected codebook indices (see Tables 22-23 in section 5.6), and decode quantization step size indices (see Table 25 in section 5.7). The quantization step size is then obtained from the quantization step size index via Table B.1 of the DRA standard annex. Finally, from the quantization indices and the quantization step size, the subband samples are obtained in the inverse quantization module 130 (see Fig. 1B and section 6.4 of the DRA standard). In all, the DRA standard provides 27 Huffman codebooks, HuffTab01-HuffTab27, for the above decoding requirements. Based on the characteristics of each codebook, the Huffman decoding method described above can be further optimized.
[047] For convenience of description, only the calculation results for 27 huffman codebooks of DRA are given herein (see table 3), and specific codebook data can be seen in the corresponding part of the DRA standard appendix. As will be appreciated by those skilled in the art, optimization of DRA huffman decoding can be achieved by reading the description of the present embodiment in conjunction with the actual 27 DRA huffman code books.
[048] In particular, fig. 4 shows a modified DRA Huffman decoding method 3000 according to a second embodiment of the invention. The method is substantially the same as the general modified Huffman decoding method 1000 described above in connection with fig. 2, except for an initial bit reading step 3101A, in which the value of x is read from the encoded code stream and used to guide the first-level search. The x-value calculation step 3101' related to step 3101A will be explained in detail later with reference to fig. 6.
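As a rough illustration of the two-level lookup that method 3000 inherits from the general method 1000 (and that the claims below describe), the following C sketch may help. The table layout, the field names c1/c2/c3 (the three "components" of a search unit), the bit reader, and the toy codebook are all assumptions made for illustration; they are not the literal DRA data structures.

```c
#include <assert.h>
#include <stdint.h>

/* One codebook entry: the three "components" of a search unit (names assumed). */
typedef struct { int c1, c2, c3; } Unit;

/* Minimal MSB-first bit reader (illustrative; not taken from the DRA spec). */
typedef struct { const uint8_t *buf; int pos; } BitReader;

static int read_bits(BitReader *br, int n) {
    int v = 0;
    while (n--) {
        v = (v << 1) | ((br->buf[br->pos >> 3] >> (7 - (br->pos & 7))) & 1);
        br->pos++;
    }
    return v;
}

/* Two-level lookup as described by the claims: x bits index the first-level
 * table; a leaf entry (c1 == 0) yields the result directly, otherwise c1
 * second-level entries starting at offset c3 are searched linearly. */
static int two_level_decode(BitReader *br, const Unit *lvl1, const Unit *lvl2,
                            int x, int *codeword_bits) {
    Unit u = lvl1[read_bits(br, x)];       /* first-level search unit      */
    if (u.c1 == 0) {                       /* leaf node                    */
        *codeword_bits = u.c2;
        br->pos -= x - u.c2;               /* push back over-read bits     */
        return u.c3;                       /* decoded data                 */
    }
    for (int i = 0; i < u.c1; i++) {       /* root node: second-level scan */
        Unit v = lvl2[u.c3 + i];
        int save = br->pos;
        if (read_bits(br, v.c1) == v.c3) { /* re-read c1 bits and compare  */
            *codeword_bits = v.c1 + x;     /* bit count = c1 + x           */
            return v.c2;                   /* decoded data = c2            */
        }
        br->pos = save;                    /* no match: rewind and retry   */
    }
    return -1;                             /* error case (cf. claim 7)     */
}

/* Toy codebook (x = 2): "0" -> 5, "10" -> 6, "110" -> 7, "111" -> 8. */
static const Unit demo_lvl1[4] = { {0,1,5}, {0,1,5}, {0,2,6}, {2,0,0} };
static const Unit demo_lvl2[2] = { {1,7,0}, {1,8,1} };
static const uint8_t demo_stream[] = { 0xC0 };  /* "110" then "0" */
```

With the toy codebook above, decoding the stream 0xC0 first walks the root entry at first-level index 3 and matches the second-level unit for "110", then decodes the one-bit codeword "0" after pushing back the over-read bit.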
[049] In addition, an improved DRA Huffman decoding apparatus 4000 is shown in fig. 5. The decoding apparatus is substantially the same as the general improved Huffman decoding apparatus 2000 described above in connection with fig. 3, except for an initial bit reading module 4101A, which reads the value of x from the encoded code stream; the value of x guides the operation of module 4101. The x-value calculation module 4101' associated with module 4101A will be explained in detail later in conjunction with fig. 7.
[050] In order to further optimize Huffman decoding for the DRA codebook characteristics, a trade-off needs to be made, according to those characteristics, between the number x of bits searched at the first level and the maximum search depth (max_depth) at the second level: provided the total number of entries of the two-level codebook corresponding to x does not grow too much (e.g., by no more than a certain multiple of the entry count of the two-level decoding codebook for x = 4), an inflection point of the second-level maximum search depth is sought. The goal is to raise the Huffman decoding speed as far as possible without unduly increasing the total number of codebook entries.
[051] Step 3101' and module 4101' are described in detail below in conjunction with figs. 6 and 7, respectively. Turning first to fig. 6, step 3101', executed at the encoding end to calculate x and store it in the DRA encoded code stream, is described in detail. As shown in fig. 6, in step 3b the loop variable m for a given Huffman codebook (e.g., the y-th codebook, corresponding to the rows labeled #Hufftaby in table 3) is initialized to 4; then in step 3c it is determined whether the size value when the first level reads in m bits (size(m), e.g. the corresponding size value in the row labeled tab_mx.dat in table 3) is greater than T1 times the size value when the first level reads in 4 bits (size(4), e.g. the corresponding size value in the row labeled tab_4x.dat in table 3). Here T1 is a first threshold, preferably 1.5 to 2, more preferably 1.8 to 2, still more preferably 1.9 to 2. If the judgment in step 3c is yes, step 3d is entered directly and x is set to 4; the value of x is then stored in the DRA code stream in step 3h, and step 3101' ends.
[052] If the determination in step 3c is no, step 3e is entered for a further determination, i.e., whether m equals 8. If the judgment in step 3e is yes, step 3d is entered directly and x is set to 4; the value of x is then stored in the DRA code stream in step 3h, and step 3101' ends. If the judgment in step 3e is no, a third judgment step 3f is entered, i.e., whether (MD(m)-MD(m+1))/MD(m) is greater than T2, where MD(m) denotes the maximum search depth when m bits are read in at the first level (the corresponding max_depth value in the row labeled tab_mx.dat in table 3). T2 is a second threshold, preferably 0.2 to 0.7, more preferably 0.2 to 0.5, further preferably 0.25 to 0.3.
[053] If the judgment in step 3f is yes, step 3g is entered and x is set to m+1; the value of x is then stored in the DRA code stream in step 3h, and step 3101' ends. If the judgment in step 3f is no, m is incremented by 1, processing returns to step 3c, and the next round of judgment continues.
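Under the assumption that the per-codebook size and max_depth statistics of table 3 are available as arrays, steps 3b-3h can be sketched as the following C function (the function and parameter names are invented for illustration; the actual Table 3 data is not reproduced here):

```c
/* Selection of the first-level bit count x following steps 3b-3h of fig. 6.
 * size[m] is the total entry count of the two-level codebook when m bits are
 * read at the first level, md[m] the second-level maximum search depth
 * (both assumed valid for m = 4..8, i.e. the Table 3 statistics);
 * t1 and t2 are the thresholds T1 and T2. */
static int select_x(const double size[9], const double md[9],
                    double t1, double t2) {
    for (int m = 4; ; m++) {
        if (size[m] > t1 * size[4])              /* step 3c: size grows too fast */
            return 4;                            /* step 3d                      */
        if (m == 8)                              /* step 3e: no inflection found */
            return 4;
        if ((md[m] - md[m + 1]) / md[m] > t2)    /* step 3f: depth drops sharply */
            return m + 1;                        /* step 3g                      */
    }
}
```

For example, with the threshold values T1 = 2 and T2 = 0.25 mentioned in the claims, a codebook whose maximum search depth falls steeply between m = 5 and m = 6 would yield x = 6, while a codebook whose size doubles before any such drop would yield x = 4.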
[054] Turning next to fig. 7, module 4101', used at the encoding end to calculate x and store it in the DRA encoded code stream, is detailed. As shown in fig. 7, assignment module 4b first initializes the loop variable m for a given Huffman codebook (e.g., the y-th codebook, corresponding to the rows labeled #Hufftaby in table 3) to 4; the first judging module 4c then determines whether size(m), the size value when the first level reads in m bits, is greater than T1 times size(4), the size value when the first level reads in 4 bits. Here T1 is the first threshold, preferably 1.5 to 2, more preferably 1.8 to 2, still more preferably 1.9 to 2. If the determination in module 4c is yes, a first x-value determination module 4d is entered via its first path, which sets x to 4; the value of x is then stored in the DRA code stream by storage module 4h, and the processing of module 4101' ends.
[055] If the determination in module 4c is no, a second determination module 4e is entered via its second path, which determines whether m equals 8. If the determination in module 4e is yes, the first x-value determination module 4d is entered via its first path, which sets x to 4; the value of x is then stored in the DRA code stream by storage module 4h, and the processing of module 4101' ends. If the determination in module 4e is no, a third determination module 4f is entered via its second path, which determines whether (MD(m)-MD(m+1))/MD(m) is greater than T2, where MD(m) denotes the maximum search depth when m bits are read in at the first level (the corresponding max_depth value in the row labeled tab_mx.dat in table 3). T2 is the second threshold, preferably 0.2 to 0.7, more preferably 0.2 to 0.5, further preferably 0.25 to 0.3.
[056] If the judgment in module 4f is yes, a second x-value determination module 4g is entered directly, which sets x to m+1; the value of x is then stored in the DRA code stream by storage module 4h, and the processing of module 4101' ends. Otherwise, m is incremented by 1 and processing returns to the first judging module 4c for the next round of judgment.
Table 3 analysis for 27 huffman codebooks
(The body of Table 3 is reproduced in the original publication as images and is not recoverable here; it lists, for each codebook #Hufftab01-#Hufftab27 and each first-level bit count, the size and max_depth values referenced in the text.)
Third embodiment: improved DRA huffman decoding technique
[057] The following embodiment addresses features of the DRA technology and details yet another improved DRA Huffman decoding technique. The 27 codebooks targeted in this embodiment are the same 27 DRA Huffman codebooks described in the second embodiment in connection with figs. 1A-1B and the DRA standard.
[058] In the two-level Huffman decoding process, the number x of first-level search bits is positively correlated with the total number of entries of the two-level codebook and negatively correlated with the maximum depth (max_depth) of the second-level decoding. An optimized two-level codebook requires a trade-off between the total entry count and the maximum depth, yielding a preferred value of x. A comprehensive cost function can therefore be defined; it is used in the improved Huffman decoding method 5000 (not shown) and the improved Huffman decoding apparatus 6000 (not shown), respectively.
[059] The comprehensive cost function mentioned above is defined as:
cost(x) = α*max_depth(x) + log2(size(x))*log2(size(0))
where α is a weight parameter; x denotes the number of bits initially read at the first level by the decoding end; max_depth(x) and size(x) denote the max_depth and size values in the row labeled tab_xx.dat in table 3; and size(0) denotes the number of entries of the linear codebook.
[060] In the improved Huffman decoding method 5000, a value is first assigned to α within a certain range, preferably 0.4-1.6, more preferably 0.6-1.4, further preferably 0.8-1.2. Then the values cost_y(x) corresponding to different values of x are calculated for the y-th codebook (i.e., one of the 27 codebooks of the DRA algorithm), where y is the codebook number ranging from 1 to 27. Next, the value x_yMin of x at which each cost_y(x) is minimal is recorded. Finally, for the y-th DRA codebook, the value x_yMin is output and subsequently guides the decoding of that codebook (i.e., x_yMin serves as the number of bits initially read at the first level).
[061] The improved Huffman decoding apparatus 6000 comprises: an assignment module for assigning a value to α, preferably in the range 0.4-1.6, more preferably 0.6-1.4, further preferably 0.8-1.2; a calculating module for calculating the values cost_y(x) of the y-th DRA codebook (i.e., one of the 27 codebooks of the DRA algorithm) corresponding to different values of x, where y is the codebook number ranging from 1 to 27; a recording module for recording the value x_yMin of x at which each cost_y(x) is minimal; and an output module for outputting, for the y-th DRA codebook, the value x_yMin, which subsequently guides the decoding of that codebook (i.e., x_yMin serves as the number of bits initially read at the first level).
[062] In the improved Huffman decoding method 5000 and/or the improved Huffman decoding apparatus 6000, a larger α means that the codebook selected for x has a smaller max_depth and a larger size, i.e., a higher decoding speed but a larger codebook storage space; conversely, a smaller α means that the codebook selected for x has a larger max_depth and a smaller size, i.e., a lower decoding speed but a smaller codebook storage space.
[063] According to one example of the present invention, with α = 1.0 and 4 ≤ x ≤ 8, the values cost_y(x) and x_yMin for the 27 codebooks of table 3 are as shown in table 4.
TABLE 4 Selection of the first-level search bit number
Codebook (y)  size(0)  cost_y(4)  cost_y(5)  cost_y(6)  cost_y(7)  cost_y(8)  x_yMin
1 7 11.23 14.04 16.84 19.65 22.46 4
2 64 62.02 62.35 53.92 56.66 60.23 6
3 32 38.24 39.5 41.52 40.65 44.25 4
4 18 28.26 29.34 31.56 34.42 37.45 4
5 18 31.26 32.49 34.81 37.55 40.52 4
6 116 75.69 74.61 72.28 71.32 72.67 7
7 116 73.85 73.38 74.03 75.04 77.41 5
8 16 25.58 26.99 29.43 32.18 35.07 4
9 16 31.23 32.71 34.84 37.39 40.18 4
10 81 60.16 53.48 53.93 55.29 55.57 5
11 25 28.82 29.35 30.27 32.51 37.15 4
12 81 62.36 54.12 51.86 53.08 55.7 6
13 289 102.43 92.85 86.47 82.81 82.18 8
14 31 32.37 32.96 34.76 36.9 39.63 4
15 63 47.12 47.21 45.46 47.55 51.4 6
16 127 71.68 64.38 60.02 59.47 62.13 7
17 255 102.44 93.91 83.94 77.76 78.74 7
18 256 106.61 87.99 80.83 76.64 76.69 7
19 81 60.16 54.48 52.8 54.13 56.98 6
20 25 30.01 30.5 32.27 34.71 37.15 4
21 81 59.36 55.39 55.31 57.24 57.86 6
22 289 157.35 147.81 117.64 106.18 97.61 8
23 31 34.54 33.51 35.05 37.95 41.69 5
24 63 48.12 46.21 45.53 47.49 51.31 6
25 127 76.75 72.38 66.78 62.38 63.03 7
26 255 128.53 97.11 82.05 75.4 73.1 8
27 256 96.7 81.24 74.28 71.54 71.65 7
[064] It will be appreciated by those skilled in the art that the second and third embodiments described above are not limited to DRA huffman codebooks. After reading the description and claims of the present invention, those skilled in the art can use the optimization methods and apparatuses for DRA huffman codebooks in the second and third embodiments to optimize the first-stage bit number of a general two-stage huffman codebook.
Simulation result
[065] The inventors of the present application compared the performance of the three improved Huffman decoding methods proposed herein with that of the existing linear-search Huffman decoding method on a fixed-point 16-bit digital signal processor, the Blackfin-533. All Huffman decoding programs are written in standard C, contain no platform-specific optimization, and run embedded in an actual DRA audio decoder. Although the verification platform is the Blackfin-533, the programs themselves can also run on a general-purpose PC as well as on most embedded processors and digital signal processors. The data given in tables 4-1 to 4-3 are the processor clock cycles consumed in decoding the Huffman-decoding portion of 1 second of DRA code stream, where the DRA code stream has a sampling rate of 48 kHz and a bit rate of 128 kbps.
TABLE 4-1 comparative test for the first embodiment of the present invention
(The test data of Table 4-1 are reproduced in the original publication as an image and are not recoverable here.)
[066] For the first embodiment, the modified codebook has 3112 entries versus 2819 original entries, i.e., the entry count is increased by about 10%.
TABLE 4-2 comparative tests on the second embodiment of the present invention
(The test data of Table 4-2 are reproduced in the original publication as an image and are not recoverable here.)
[067] For the second embodiment, the modified codebook has 4266 entries versus 2819 original entries, i.e., the entry count is increased by about 51%.
Tables 4-3 comparative tests on the third embodiment of the present invention
(The test data of Table 4-3 are reproduced in the original publication as an image and are not recoverable here.)
[068] For the third embodiment, the modified codebook has 4324 entries versus 2819 original entries, i.e., the entry count is increased by about 53%.
[069] The following conclusions can be drawn from tables 4-1 to 4-3: without compiler optimization, the optimized Huffman decoding module consumes only 1/6 to 1/7 of the processor clock cycles per second consumed by linear search, i.e., a speed-up ratio of 6-7; with compiler optimization, the Huffman decoding module consumes only 1/4 to 1/5 of the clock cycles consumed by linear search, i.e., a speed-up ratio of 4-5. The method and apparatus of the present application therefore have a remarkable optimization effect on the Huffman decoding module.
[070] While the invention has been described in connection with what is presently considered to be the most practical and preferred embodiment, it is to be understood by those skilled in the art that the invention is not to be limited to the disclosed embodiment, but on the contrary, is intended to cover various modifications and equivalent arrangements included within the spirit and scope of the appended claims. As can be appreciated by those skilled in the art: many variations and/or modifications may be made to the invention as shown in the specific embodiments without departing from the spirit or scope of the invention as broadly described. The present embodiments are, therefore, to be considered in all respects as illustrative and not restrictive.

Claims (26)

1. A huffman decoding method comprising:
a first-level search for obtaining a first-level search unit having three components of a two-level Huffman codebook based on a data stream to be decoded;
a first level of judgment that judges whether the first level of search unit is a leaf node or a root node;
wherein,
if the first-level search unit is judged to be a leaf node, outputting the third component and the second component of the first-level search unit as the decoded data and the number of bits of the Huffman codeword, respectively;
otherwise, the huffman decoding method further comprises: a second level search that receives the first level search unit and obtains a second level search unit of the two-level huffman codebook having three components based on a third component of the first level search unit; wherein the Huffman decoding method further comprises:
receiving the second-level searching unit, and performing second-level judgment, wherein the second-level judgment comprises:
p) determining whether the second level search cell is a target cell;
q) if the second-level search unit is determined to be the target unit, the decoded data and the number of bits of the Huffman codeword are output as functions of the first and second components of the second-level search unit;
r) if not, resetting the second-stage search unit as the next search unit in the two-stage Huffman codebook, and performing the second-stage judgment again.
2. The huffman decoding method according to claim 1, wherein in the first-stage search, the first-stage search unit is obtained in the two-stage huffman codebook with a first read-in value as an index, wherein the first read-in value is a decimal representation value corresponding to x-bit data read from the data stream to be decoded.
3. The huffman decoding method according to claim 2, wherein x is fixed to 4.
4. The huffman decoding method according to claim 3, wherein if the first level search unit is decided to be a root node, then in the second level decision, when the third component of the second level search unit is equal to a second read value, the second level search unit is decided to be a target unit; otherwise, determining that the second level search cell is not a target cell,
wherein the second read-in value is the decimal representation value corresponding to data read again from the data stream to be decoded, the number of bits of said data being equal to the first component of the second-level search unit.
5. The Huffman decoding method of claim 4, wherein in the second-level judgment, the function of the first and second components of the second-level search unit is:
the decoded data equals the second component of the second-level search unit; and
the number of bits of the Huffman codeword equals the first component of the second-level search unit plus x.
6. The huffman decoding method according to claim 3, wherein in the first level judgment, if the first component of the first level search unit is 0, it is judged that the first level search unit is a leaf node; otherwise, the first-level search unit is judged to be a root node.
7. The Huffman decoding method of claim 6, wherein if the first level search unit is determined to be a root node, the Huffman decoding method further comprises an error correction step of recording a number of cycles in the second level determination and reporting an error when the number of cycles is equal to the first component of the first level search unit.
8. A huffman decoding apparatus comprising:
the first-stage searching module is used for obtaining a first-stage searching unit with three components of a two-stage Huffman codebook based on a data stream to be decoded;
the first-level judging module is used for judging whether the first-level searching unit is a leaf node or a root node;
wherein,
if the first-stage search unit is judged to be a leaf node, outputting a third component and a second component of the first-stage search unit as decoded data and the bit number of a Huffman code word respectively through a first result output module;
otherwise, the huffman decoding apparatus further comprises: the second-stage searching module is used for receiving the first-stage searching unit and obtaining a second-stage searching unit with three components of the two-stage Huffman codebook based on a third component of the first-stage searching unit; wherein the Huffman decoding apparatus further comprises:
receiving the second-level search unit receiving module, wherein the receiving module transmits the second-level search unit to a second-level judging module, and the second-level judging module operates as follows:
p) determining whether the second level search cell is a target cell;
q) if the second-level search unit is determined to be the target unit, the decoded data and the number of bits of the Huffman codeword are output as functions of the first and second components of the second-level search unit;
r) if not, resetting the second-stage search unit as the next search unit in the two-stage Huffman codebook, and performing the operation of the second-stage judgment module again.
9. The huffman decoding device according to claim 8, wherein in the first stage searching module, the first stage searching unit is obtained in the two stage huffman codebook using a first read value as an index, wherein the first read value is a decimal representation value corresponding to x-bit data read from the data stream to be decoded.
10. The huffman decoding device according to claim 9, wherein x is fixed to 4.
11. The huffman decoding apparatus of claim 10, wherein if the first level searching unit is determined to be a root node, in the second level determining module, when the third component of the second level searching unit is equal to the second read value, the second level searching unit is determined to be a target unit; otherwise, determining that the second level search cell is not a target cell,
wherein the second read-in value is the decimal representation value corresponding to data read again from the data stream to be decoded, the number of bits of said data being equal to the first component of the second-level search unit.
12. The Huffman decoding apparatus of claim 11, wherein in the second-level judging module, the function of the first and second components of the second-level search unit is:
the decoded data equals the second component of the second-level search unit; and
the number of bits of the Huffman codeword equals the first component of the second-level search unit plus x.
13. The huffman decoding device of claim 10, wherein in the first level determining module, if the first component of the first level searching unit is 0, the first level searching unit is determined to be a leaf node; otherwise, the first-level search unit is judged to be a root node.
14. The huffman decoding device of claim 13, wherein if the first level searching unit is determined to be the root node, the huffman decoding device further comprises an error correction module for recording the number of cycles in the second level determining module and reporting an error when the number of cycles is equal to the first component of the first level searching unit.
15. A DRA huffman decoding method comprising:
reading a value of x from a data stream to be decoded, and taking a decimal representation value corresponding to the x bit data in the data stream to be decoded as a first read value;
a first-stage search for obtaining a first-stage search unit having three components in a two-stage DRA Huffman codebook with the first read-in value as an index,
a first level of judgment that judges whether the first level of search unit is a leaf node or a root node;
wherein,
if the first-level search unit is judged to be a leaf node, outputting the third component and the second component of the first-level search unit as the decoded data and the number of bits of the Huffman codeword, respectively;
otherwise, the DRA huffman decoding method further comprises: a second level search that receives the first level search unit and obtains a second level search unit of the two-level DRA huffman codebook having three components based on a third component of the first level search unit; wherein the DRA huffman decoding method further comprises:
receiving the second-level searching unit, and performing second-level judgment, wherein the second-level judgment comprises:
p) determining whether the second level search cell is a target cell;
q) if the second-level search unit is determined to be the target unit, the decoded data and the number of bits of the Huffman codeword are output as functions of the first and second components of the second-level search unit;
r) if not, resetting the second-stage search unit as the next search unit in the two-stage DRA Huffman codebook, and performing the second-stage judgment again.
16. The DRA huffman decoding method according to claim 15, wherein x is determined by:
x) calculating, according to a given weight coefficient α, the comprehensive cost function cost_y(x) of the y-th two-level DRA Huffman codebook, where y = 1, 2, 3, ... 27;
y) recording, for each y-th two-level DRA Huffman codebook, the value x_yMin of the first-level read-in bit number x at which the comprehensive cost function takes its minimum value;
z) outputting the value x_yMin as the optimal first-level read-in bit number corresponding to the y-th two-level DRA Huffman codebook, and storing the value x_yMin in the data stream to be decoded;
the comprehensive cost function corresponding to the y two-stage DRA Huffman codebook is a function of the second-stage maximum search depth max _ depth (x), the codebook size value size (x) and the linear codebook item number size (0) of the y two-stage DRA Huffman codebook.
17. The DRA huffman decoding method according to claim 16, wherein the general formula of the synthetic cost function is:
cost(x)=α*max_depth(x)+log2(size(x))*log2(size(0))。
18. the method of claim 17, wherein: α is 1.
19. The DRA huffman decoding method according to claim 15, wherein x is determined by:
a) initializing the value of a cyclic variable m for the y two-level DRA huffman codebook to 4;
b) judging whether the size value when the m bits are read in the first stage is larger than T1 times of the size value when the 4 bits are read in the first stage;
c) if b) is true, go to step d); otherwise, go to step e);
d) setting the x to 4, and then turning to step h);
e) if m is equal to 8, go to step d); otherwise go to step f);
f) if (MD(m)-MD(m+1))/MD(m) is greater than T2, go to step g); otherwise, increment m by 1 and go to step b);
wherein MD(m) denotes the maximum search depth when m bits are read in at the first level,
g) setting said x to m + 1; and
h) storing the value of x in the data stream to be decoded.
20. The DRA huffman decoding method according to claim 19, wherein the T1 is 2 and the T2 is 0.25.
21. A DRA huffman decoding apparatus comprising:
an initial bit reading module, which reads the value of x from the data stream to be decoded, and takes the decimal representation value corresponding to the x bit data in the data stream to be decoded as a first read value;
a first-stage searching module for obtaining a first-stage searching unit with three components in a two-stage DRA Huffman codebook by using the first read-in value as an index,
the first-level judging module is used for judging whether the first-level searching unit is a leaf node or a root node;
wherein,
if the first-stage search unit is judged to be a leaf node, outputting a third component and a second component of the first-stage search unit as decoded data and the bit number of a Huffman code word respectively through a first result output module;
otherwise, the DRA huffman decoding device further comprises: a second-stage search module, configured to receive the first-stage search unit, and obtain a second-stage search unit having three components for the two-stage DRA huffman codebook based on a third component of the first-stage search unit; wherein the DRA huffman decoding device further comprises:
receiving the second-level search unit receiving module, wherein the receiving module transmits the second-level search unit to a second-level judging module, and the second-level judging module operates as follows:
p) determining whether the second level search cell is a target cell;
q) if the second-level search unit is determined to be the target unit, the decoded data and the number of bits of the Huffman codeword are output as functions of the first and second components of the second-level search unit;
r) if not, resetting the second-stage search unit as the next search unit in the two-stage DRA Huffman codebook, and performing the operation of the second-stage judgment module again.
22. The DRA huffman decoding apparatus according to claim 21, wherein the x is determined by an x-value calculation module comprising:
a comprehensive cost function calculating module for calculating, according to a given weight coefficient α, the comprehensive cost function cost_y(x) of the y-th two-level DRA Huffman codebook, where y = 1, 2, 3, ... 27;
a recording module for recording, for each y-th two-level DRA Huffman codebook, the value x_yMin of the first-level read-in bit number x at which the comprehensive cost function takes its minimum value; and
an output module for outputting the value x_yMin as the optimal first-level read-in bit number corresponding to the y-th two-level DRA Huffman codebook, and storing the value x_yMin in the data stream to be decoded,
the comprehensive cost function corresponding to the y two-stage DRA Huffman codebook is a function of the second-stage maximum search depth max _ depth (x), the codebook size value size (x) and the linear codebook item number size (0) of the y two-stage DRA Huffman codebook.
23. The DRA huffman decoding apparatus according to claim 22, wherein the general formula of the synthetic cost function is:
cost(x)=α*max_depth(x)+log2(size(x))*log2(size(0))。
24. the DRA huffman decoding device according to claim 23, wherein: α is 1.
25. The DRA huffman decoding apparatus according to claim 21, wherein the x is determined by an x-value calculation module comprising:
the assigning module is used for initializing the value of a cyclic variable m aiming at the y two-stage DRA Huffman codebook to 4;
the first judging module is used for judging whether the size value when the m bits are read in the first stage is larger than T1 times of the size value when the 4 bits are read in the first stage;
if the judgment result in the first judgment module is true, turning to a first x value determination module; otherwise, go to the second judging module;
the first x value determining module is used for setting x to be 4 and transferring the x to the output module;
in the second decision block, if m is equal to 8, go to the first x value determination block; otherwise, turning to a third judging module;
in the third judging module, if (MD(m)-MD(m+1))/MD(m) is greater than T2, go to the second x-value determination module; otherwise, increment m by 1 and go to the first judging module; wherein MD(m) denotes the maximum search depth when m bits are read in at the first level,
the second x value determining module sets x as m +1 and transfers the x to the output module; and
and the output module stores the value of x in the data stream to be decoded.
26. The DRA huffman decoding device according to claim 25, wherein the T1 is 2 and the T2 is 0.25.
CN200810116546.8A 2008-07-11 2008-07-11 Improved Huffman decoding method and device Active CN101626242B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN200810116546.8A CN101626242B (en) 2008-07-11 2008-07-11 Improved Huffman decoding method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN200810116546.8A CN101626242B (en) 2008-07-11 2008-07-11 Improved Huffman decoding method and device

Publications (2)

Publication Number Publication Date
CN101626242A true CN101626242A (en) 2010-01-13
CN101626242B CN101626242B (en) 2014-04-16

Family

ID=41521948

Family Applications (1)

Application Number Title Priority Date Filing Date
CN200810116546.8A Active CN101626242B (en) 2008-07-11 2008-07-11 Improved Huffman decoding method and device

Country Status (1)

Country Link
CN (1) CN101626242B (en)




Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101894556A (en) * 2010-05-26 2010-11-24 北京红旗胜利科技发展有限责任公司 Method and device for testing audio decoding program
CN105207677A (en) * 2015-09-01 2015-12-30 上海斐讯数据通信技术有限公司 Graphical coding and decoding system and method
CN105207677B (en) * 2015-09-01 2018-02-06 上海斐讯数据通信技术有限公司 A kind of graphical coding/decoding system and method

Also Published As

Publication number Publication date
CN101626242B (en) 2014-04-16

Similar Documents

Publication Publication Date Title
US10841584B2 (en) Method and apparatus for pyramid vector quantization de-indexing of audio/video sample vectors
US7433824B2 (en) Entropy coding by adapting coding between level and run-length/level modes
JP4963498B2 (en) Quantization of speech and audio coding parameters using partial information about atypical subsequences
US8665945B2 (en) Encoding method, decoding method, encoding device, decoding device, program, and recording medium
US20080262855A1 (en) Entropy coding by adapting coding between level and run length/level modes
NO341186B1 (en) Selective application using multiple entropy models in adaptive coding and decoding
JP6595687B2 (en) Encoding method, encoding device, program, and recording medium
US8576910B2 (en) Parameter selection method, parameter selection apparatus, program, and recording medium
JP2004258603A (en) Entropy encoding adapting encoding between level mode and run length/level mode
US7002494B2 (en) Low memory and MIPS efficient technique for decoding Huffman codes using multi-stage, multi-bits lookup at different levels
CN101626242B (en) Improved Huffman decoding method and device
US7739119B2 (en) Technique for implementing Huffman decoding
JP4918103B2 (en) Encoding method, decoding method, apparatus thereof, program, and recording medium
CN110771045B (en) Encoding device, decoding device, encoding method, decoding method, and recording medium
CN111788628A (en) Encoding device, encoding method, program, and recording medium
WO2010067800A1 (en) Encoding method, decoding method, encoding device, decoding device, program, and recording medium
CN101626243A (en) Improved Hofmann decoding method and device
KR100686354B1 (en) Huffman decoding method and device for using variable length tree
JP5714172B2 (en) Encoding apparatus, method, program, and recording medium
Singhai et al. MSVQ: A data compression technique for multimedia applications
WO2013129439A1 (en) Encoding device, encoding method, program and recording medium

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
DD01 Delivery of document by public notice

Addressee: Digital Wave Co., Ltd.

Document name: Notification that Application Deemed to be Withdrawn

C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
C56 Change in the name or address of the patentee
CP02 Change in the address of a patent holder

Address after: 100081. Office building 2, building 2, No. 1, Nongda South Road, Beijing, Haidian District, B-221-183

Patentee after: Digital Wave Co., Ltd.

Address before: 100031, 503/504 building, Capital Times Square, 88 West Chang'an Avenue, Beijing, Xicheng District

Patentee before: Digital Wave Co., Ltd.

ASS Succession or assignment of patent right

Owner name: SHENZHEN RISING SOURCE TECHNOLOGY CO., LTD.

Free format text: FORMER OWNER: DIGITAL WAVE (BEIJING) CO., LTD.

Effective date: 20140922

C41 Transfer of patent application or patent right or utility model
COR Change of bibliographic data

Free format text: CORRECT: ADDRESS; FROM: 100081 HAIDIAN, BEIJING TO: 518057 SHENZHEN, GUANGDONG PROVINCE

TR01 Transfer of patent right

Effective date of registration: 20140922

Address after: 518057, room 9, 610 software building, Nanshan District hi tech, Guangdong, Shenzhen

Patentee after: Shenzhen Guangsheng Xinyuan Technology Co., Ltd.

Address before: 100081. Office building 2, building 2, No. 1, Nongda South Road, Beijing, Haidian District, B-221-183

Patentee before: Digital Wave Co., Ltd.

TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20220512

Address after: 510530 No. 10, Nanxiang 2nd Road, Science City, Luogang District, Guangzhou, Guangdong

Patentee after: Guangdong Guangsheng research and Development Institute Co.,Ltd.

Address before: 518057 room 610, software building, No.9, Gaoxin Zhongyi Road, Nanshan District, Shenzhen City, Guangdong Province

Patentee before: SHENZHEN RISING SOURCE TECHNOLOGY Co.,Ltd.