Detailed Description
[024] Preferred embodiments of the invention will be described hereinafter with reference to the accompanying drawings. In the following description, functions or constructions that are well known in the art are not described in detail, since doing so would obscure the description of the invention with unnecessary detail.
[025] A typical DRA audio encoder 10 is shown in fig. 1A; it may be implemented in hardware, software, and/or firmware. In short, the DRA standard specifies techniques for processing a source sound signal (e.g., input PCM samples) through a number of technical modules so as to compress the source sound while introducing only coding defects that are hardly audible. These technical modules include, but are not limited to: transient analysis module 20, multi-resolution filterbank module 22, linear scalar quantization module 30, quantization index coding module 32, codebook selection module 34, human auditory model module 40, global bit allocation module 42, and multiplexing module 50. According to the relevant provisions of the DRA standard, these modules are mandatory; that is, a standard-compliant DRA output code stream (i.e., a DRA standard code stream) is one that has been processed by these modules. The modules can be functionally divided into four groups, namely a multi-resolution analysis group (comprising the transient analysis module 20 and the multi-resolution filterbank module 22), a quantization group (comprising the linear scalar quantization module 30, the quantization index coding module 32, and the codebook selection module 34), a psychoacoustic model group (comprising the human auditory model module 40 and the global bit allocation module 42), and a MUX group (the multiplexing module 50). The invention is primarily concerned with the quantization group described above.
[026] In fig. 1B, a DRA audio decoder 110 is shown that decodes a DRA encoded code stream to obtain a DRA decoded signal (i.e., a PCM sample output). The flow associated with the DRA Huffman decoding method is as follows, with reference to fig. 1B: first, the demultiplexing module 150 receives a DRA code stream and extracts the control information and data information it contains; then the codebook selection information is passed to the codebook selection module 134, which controls the quantization index module 132 and the quantization unit number module; the quantization index module 132 decodes the data from the demultiplexing module 150, using the control information from the codebook selection module 134, to obtain quantization indices; finally, the inverse quantization module 130 inversely quantizes the decoded quantization indices using the control information provided by the quantization unit number module.
First embodiment: optimized Huffman decoding technique
[027] The improved Huffman decoding method 1000 according to the first embodiment of the present invention will be explained in detail below with reference to fig. 2. It may be used not only in DRA audio codec systems but also in other audio and/or video codec systems, and is therefore referred to hereinafter as the "general improved Huffman decoding method".
[028] For ease of discussion, one Huffman codebook to be decoded is given by way of example (shown in table 1); it can be used in any known audio-video codec system. The codebook is arranged in two levels, each node consisting of three variables: a first-level node has the general form Structure1(X, Determin1, Determin2), and a second-level node the form Structure2(X, Determin1, Determin2). For example, for the first-level node numbered 2 (i.e., node (1, 3, 0)), X is 1, Determin1 is 3, and Determin2 is 0. For another example, for the 3rd second-level node of the row numbered 4 (i.e., node (5, 23, 4)), X = 5, Determin1 = 23, and Determin2 = 4.
Table 1 Huffman codebook example (parallel)
Number | First-level node | Second-level nodes
0  | (2,3,0)   |
1  | (2,3,0)   |
2  | (1,3,0)   |
3  | (1,3,0)   |
4  | (16,0,12) | (1,10,1)(0,15,2)(5,23,4)(13,22,5)(9,25,5)(12,26,5)(15,31,5)(29,24,6)(16,28,6)(28,29,6)(35,27,7)(34,30,7)
5  | (0,4,0)   |
6  | (28,0,6)  | (3,12,2)(1,13,2)(0,14,2)(4,18,3)(10,20,4)(11,21,4)
7  | (4,4,0)   |
8  | (9,4,0)   |
9  | (8,4,0)   |
10 | (7,4,0)   |
11 | (34,0,4)  | (1,11,1)(0,16,2)(2,17,3)(3,19,3)
12 | (3,3,0)   |
13 | (3,3,0)   |
14 | (5,4,0)   |
15 | (6,4,0)   |
[029] All nodes of table 1 are divided into two groups (i.e., two columns) arranged in parallel; the Huffman codebook shown in table 1 may therefore be called a parallel Huffman codebook. There is correspondingly a serial Huffman codebook arrangement, in which all nodes are arranged sequentially in a single column (i.e., a single group): the first-level nodes come first, and the second-level nodes follow in order. The serial Huffman codebook corresponding to the parallel Huffman codebook of table 1 is shown in table 2.
Table 2 Huffman codebook example (serial)
[030] The arrangement of table 2 is briefly described with reference to table 1: the nodes numbered 0-15 in table 2 correspond one-to-one to the first-level nodes numbered 0-15 in table 1; the nodes numbered 16-27 in table 2 correspond one-to-one to the 12 second-level nodes of the row numbered 4 in table 1; the nodes numbered 28-33 in table 2 correspond one-to-one to the 6 second-level nodes of the row numbered 6 in table 1; and the nodes numbered 34-37 in table 2 correspond one-to-one to the 4 second-level nodes of the row numbered 11 in table 1.
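The parallel-to-serial mapping just described can be sketched in C: the first-level nodes are copied first, then each row's second-level nodes are appended in row order, so that the serial start indices 16, 28, and 34 line up with the jump addresses stored in the root nodes of rows 4, 6, and 11. The node layout and counts are taken from table 1; the function and variable names are illustrative, not part of the DRA standard.

```c
#include <assert.h>

typedef struct { int a, b, c; } Node;  /* generic three-variable node */

/* Flatten a parallel two-level codebook (table 1) into the serial layout
   (table 2): all first-level nodes first, then each row's second-level
   nodes in order. Returns the total number of serial entries. */
static int flatten(const Node first[], int n_first,
                   const Node *second[], const int counts[], Node out[]) {
    int n = 0;
    for (int i = 0; i < n_first; i++)
        out[n++] = first[i];                  /* serial entries 0..n_first-1 */
    for (int i = 0; i < n_first; i++)
        for (int j = 0; j < counts[i]; j++)
            out[n++] = second[i][j];          /* second-level nodes, row order */
    return n;
}
```

With the counts of table 1 (12 second-level nodes in row 4, 6 in row 6, 4 in row 11), the serial table has 16 + 12 + 6 + 4 = 38 entries, and the second-level runs begin at serial indices 16, 28, and 34, exactly the jump_address values of the three root nodes.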
[031] The general improved Huffman decoding method 1000 will be described in detail below in conjunction with tables 1-2 and fig. 2. As shown in fig. 2, the general improved Huffman decoding method 1000 begins at step 1100; then, in step 1101, x bits of data (x ∈ N, 4 ≤ x ≤ 8; x is fixed to 4 in the first embodiment) are read in from the data stream to be decoded. In step 1102, this x-bit binary data is converted to a decimal number, which serves as the linear index into the first-level codebook of table 1. Next, in step 1103, table 1 is looked up according to the linear index to obtain a search unit in the first-level codebook. This completes the first-level search, and the first determination begins.
[032] Then, in step 1104, it is determined whether Determin2 of this search unit is zero. If Determin2 = 0, Structure1 is a leaf node, its concrete form becomes (symbol, bit_used1, 0), and the decoding process goes to step 1110, outputting symbol as the decoded data and bit_used1 as the number of bits of the Huffman codeword; the decoding process then goes to step 1109 and ends. If Determin2 is not 0, Structure1 is a root node, its concrete form becomes (jump_address, Determin1, num_of_subentries), and the decoding process goes to step 1105 to calculate the starting position orig (i.e., the index position in table 2) of the second-level Huffman decoding search. Specifically, from the three variables of Structure1 when Determin2 is not 0, the starting position of the second-level search is obtained as orig = jump_address, and looking up table 2 with this value yields the initial second-level Huffman node Structure2 in step 1105; the third variable of Structure1, num_of_subentries, gives the total number of corresponding second-level Huffman nodes, also referred to as the maximum search depth (max_depth). Finally, still in step 1105, the loop variable i is initialized to orig. At this point the first determination is complete, and the second-level search begins (step 1106).
[033] In the second-level search, every second-level Huffman node Structure2 is a leaf node, which may be written in the concrete form Structure2(codeword, symbol, bit_used2). In step 1106, bit_used2(i) bits of binary data are read in and converted into a codeword to be compared, C_cw, where bit_used2(i) denotes the value of bit_used2 for the node numbered i in table 2. Next, in step 1107, C_cw is compared with codeword(i) for equality, where codeword(i) denotes the codeword value of the node numbered i in table 2. If C_cw = codeword(i), go to step 1108; otherwise, add 1 to i and repeat steps 1106-1107. In practice, step 1107 is repeated no more than max_depth times. In step 1108, the symbol of the i-th node is output as the decoded data, and bit_used2 + x is output as the number of bits of the Huffman codeword. The decoding process then proceeds to step 1109, where it ends.
[034] The cases where the first-level Huffman node is a leaf node and where it is a root node are illustrated below with reference to tables 1-2.
[035] For the case where the first-level Huffman node is a leaf node: for example, the 4 bits 0001₂ read in step 1101 (in this context, XYZ₂ denotes data XYZ in binary representation) indicate that the first-level codebook index is 1; looking up table 1 in step 1103 yields node (2, 3, 0). In step 1104 it is determined that Determin2 = 0, so node (2, 3, 0) is a leaf node; in step 1110, 2 is output as the decoded data and 3 as the number of bits of the Huffman codeword, and the decoding process then goes to step 1109 and ends.
[036] For the case where the first-level Huffman node is a root node: for example, the 4 bits 0110₂ read in step 1101 indicate that the first-level codebook index is 6; looking up table 1 in step 1103 yields node (28, 0, 6). In step 1104 it is determined that Determin2 is not 0, so node (28, 0, 6) is a root node. Further, in step 1105 it is calculated that: the second-level search begins at node number 28 of table 2 (orig = 28); the total number of second-level nodes corresponding to node (28, 0, 6) is 6 (i.e., the nodes numbered 28-33 in table 2); and the loop variable is initialized to 28. Then bit_used2(28) = 2 bits of binary data, 10₂, are read in and converted into the codeword to be compared, C_cw = 2. In step 1107 it is determined that C_cw = 2 ≠ codeword(28) = 3, so i is increased by 1 (i.e., i = 29) and steps 1106-1107 are repeated, until on the 4th repetition bit_used2(31) = 3, the 3 bits read in are 100₂, the corresponding C_cw is 4, and codeword(31) = 4 = C_cw, indicating that a matching node (i.e., the node numbered 31 in table 2) has been found. The decoding process then proceeds to step 1108, where symbol = 18 of the node numbered 31 in table 2 is output as the decoded data, and bit_used2 + x = 3 + 4 = 7 is output as the number of bits of the Huffman codeword. The decoding process then proceeds to step 1109, where it ends.
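The decoding flow of steps 1101-1110 and the two worked examples above can be sketched in standard C. The table data is taken directly from tables 1-2; the bitstream is modeled as a string of '0'/'1' characters purely for illustration (a real decoder would use a packed bit reader), and the helper names are assumptions, not part of any standard.

```c
#include <assert.h>

/* Node layouts follow the text: first-level leaf (symbol, bit_used1, 0),
   first-level root (jump_address, Determin1, num_of_subentries),
   second-level leaf (codeword, symbol, bit_used2). */
typedef struct { int a, b, c; } Node;

static const Node level1[16] = {              /* first-level column of table 1 */
    {2,3,0},{2,3,0},{1,3,0},{1,3,0},{16,0,12},{0,4,0},{28,0,6},{4,4,0},
    {9,4,0},{8,4,0},{7,4,0},{34,0,4},{3,3,0},{3,3,0},{5,4,0},{6,4,0}
};

static const Node serial[38] = {              /* table 2; entries 0-15 unused here */
    {0},{0},{0},{0},{0},{0},{0},{0},{0},{0},{0},{0},{0},{0},{0},{0},
    {1,10,1},{0,15,2},{5,23,4},{13,22,5},{9,25,5},{12,26,5},{15,31,5},
    {29,24,6},{16,28,6},{28,29,6},{35,27,7},{34,30,7},     /* row 4: 16-27 */
    {3,12,2},{1,13,2},{0,14,2},{4,18,3},{10,20,4},{11,21,4}, /* row 6: 28-33 */
    {1,11,1},{0,16,2},{2,17,3},{3,19,3}                      /* row 11: 34-37 */
};

/* read n bits starting at position pos of a '0'/'1' string (illustrative) */
static int read_bits(const char *bits, int pos, int n) {
    int v = 0;
    for (int k = 0; k < n; k++)
        v = (v << 1) | (bits[pos + k] - '0');
    return v;
}

/* Decode one symbol; stores the codeword bit count in *len and returns the
   symbol, or -1 if no second-level node matches within max_depth. */
int huff_decode(const char *bits, int *len) {
    const int x = 4;                           /* step 1101: read x bits */
    int idx = read_bits(bits, 0, x);           /* step 1102: linear index */
    Node n1 = level1[idx];                     /* step 1103: first-level lookup */
    if (n1.c == 0) {                           /* step 1104: Determin2 == 0, leaf */
        *len = n1.b;                           /* bit_used1 */
        return n1.a;                           /* step 1110: output symbol */
    }
    int i = n1.a;                              /* step 1105: orig = jump_address */
    int max_depth = n1.c;                      /* num_of_subentries */
    for (int k = 0; k < max_depth; k++, i++) {
        int c_cw = read_bits(bits, x, serial[i].c); /* step 1106: bit_used2(i) bits */
        if (c_cw == serial[i].a) {             /* step 1107: C_cw == codeword(i) */
            *len = serial[i].c + x;            /* step 1108: bit_used2 + x */
            return serial[i].b;                /* output symbol */
        }
    }
    return -1;                                 /* would trigger error correction */
}
```

Running this on the two example inputs reproduces paragraphs [035] and [036]: input 0001₂ yields symbol 2 with a 3-bit codeword, and input 0110100₂ yields symbol 18 with a 7-bit codeword.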
[037] The improved Huffman decoding apparatus 2000 according to the first embodiment of the present invention will be described in detail below with reference to fig. 3. It may be used not only in DRA audio codec systems but also in other audio and/or video codec systems, and will be referred to hereinafter as the "general improved Huffman decoding apparatus".
[038] The Huffman codebook to be decoded is still assumed to have two levels; as described earlier, each node consists of three variables, the first-level nodes having the general form Structure1(X, Determin1, Determin2) and the second-level nodes Structure2(X, Determin1, Determin2). The specific arrangement of the codebook is as described in tables 1-2.
[039] The general improved Huffman decoding apparatus 2000 is described in detail below with reference to tables 1-2 and fig. 3. As shown in fig. 3, the general improved Huffman decoding apparatus 2000 comprises: a buffer module 2101 that reads x bits of data (x ∈ N, 4 ≤ x ≤ 8; x is still fixed to 4) from the code stream and outputs them; a linear index conversion module 2102 that receives the output x bits, converts them into a decimal number, and outputs it as the linear index into the first-level codebook of table 1; and a search module 2103 that receives the output linear index, looks up table 1, and outputs the corresponding search unit of the first-level codebook. Together, modules 2101-2103 perform the first-level search.
[040] The general improved Huffman decoding apparatus 2000 further comprises a judging module 2104, which receives the output search unit, judges whether its Determin2 is zero, and selects an output path accordingly. The judging module 2104 shown in fig. 3 is a one-input, two-output selection module; that is, it selects one of two output paths according to the judgment result. Specifically, if Determin2 = 0, the judging module 2104 decides that the concrete form of Structure1 is (symbol, bit_used1, 0) and outputs symbol as the decoded data, and bit_used1 as the number of bits of the Huffman codeword, to the first result output module 2110, which then outputs the Huffman-decoded data. Conversely, if Determin2 is not 0, the judging module 2104 decides that the concrete form of Structure1 is (jump_address, Determin1, num_of_subentries) and outputs jump_address and num_of_subentries to the second-level Huffman decoding start position calculating module 2105. Module 2105 calculates the starting position of the second-level Huffman decoding search according to the formula orig = jump_address, and looks up table 2 with this value to obtain the initial second-level Huffman node Structure2; it takes the value of num_of_subentries as the total number of corresponding second-level Huffman nodes, i.e., the maximum search depth (max_depth). Finally, in module 2105, the loop variable i is initialized to orig, and the three quantities Structure2, max_depth, and i are output. Modules 2104 and 2110, or 2104 and 2105, together make the first determination and prepare the second-level search (in the case of the module 2105 branch).
[041] For the module 2105 branch, the general improved Huffman decoding apparatus 2000 further comprises a loop and codeword comparison module 2106, which receives the output Structure2, max_depth, and i, and takes the concrete data storage format of Structure2 to be Structure2(codeword, symbol, bit_used2). The loop and codeword comparison module 2106 then reads in bit_used2(i) bits of binary data and converts them into the codeword to be compared, C_cw. It then compares whether C_cw equals codeword(i), the codeword corresponding to i. If C_cw = codeword(i), module 2106 outputs the symbol of the initial second-level Huffman node Structure2 as the decoded data, and its bit_used2 value plus x as the number of bits of the Huffman codeword, to the second result output module 2108, which then outputs the Huffman-decoded data. Conversely, if module 2106 determines that C_cw ≠ codeword(i), it adds 1 to i, continues with the i-th node in table 2, and repeats the steps of reading bit_used2(i) bits, converting them into the codeword to be compared C_cw, and judging whether C_cw and codeword(i) are equal, until C_cw = codeword(i). When the required node is found by this loop search (i.e., a suitable i is determined), module 2106 outputs the symbol of the second-level Huffman node numbered i in table 2 as the decoded data, and that node's bit_used2 value plus x as the number of bits of the Huffman codeword, to the second result output module 2108, which then outputs the Huffman-decoded data.
[042] Preferably, an error correction module (not shown) may be added to the loop and codeword comparison module 2106; it monitors the number of loop iterations in module 2106 and reports an error when that number reaches the value of max_depth.
[043] Compared with the single-level Huffman decoding method and apparatus of the prior art, the Huffman decoding method and apparatus according to the first embodiment allow codewords with high occurrence probability to be found faster, thereby improving decoding efficiency. Further, among the second-level nodes of the Huffman codebook used in the method according to the first embodiment, bit_used2 rises monotonically, and bit_used2 is inversely correlated with the occurrence probability of a codeword, so that high-probability codewords are decoded more quickly.
[044] Those skilled in the art will understand that the Huffman codebook grading scheme according to the first embodiment, whether in the general improved Huffman decoding method 1000 or the general improved Huffman decoding apparatus 2000, is preferably two-level, but is not limited thereto: an improved Huffman codebook of more than two levels may be constructed according to the codebook characteristics. Likewise, the number x of data bits initially read from the encoded code stream is not limited to 4 and may be fixed at 5, 6, 7, or 8.
Second embodiment: improved DRA Huffman decoding technique
[045] The following embodiment addresses features of the DRA technology and further improves the "general improved Huffman decoding method" on that basis; the result is referred to herein as the "improved DRA Huffman decoding method".
[046] First, in conjunction with the description of figs. 1A-1B and in accordance with the DRA standard, DRA Huffman decoding is primarily used to: decode transient segment lengths (see section 5.4.3 of the DRA standard), decode codebook application ranges (see section 5.5.2, table 20, of the DRA standard), decode codebook indices (see section 5.5.3, table 21, of the DRA standard), decode quantization indices of subband samples based on the selected codebook indices (see section 5.6, tables 22-23, of the DRA standard), and decode quantization step size indices (see section 5.7, table 25, of the DRA standard). In addition, the quantization step size can be obtained from the quantization step size index via table B.1 of the DRA standard appendix. Finally, based on the quantization indices and the quantization step size, the subband samples are obtained in the inverse quantization module 130 (see fig. 1B) (see section 6.4 of the DRA standard). For these decoding requirements the DRA standard provides 27 Huffman codebooks, Hufftab01-Hufftab27. Based on the respective characteristics of each codebook, the Huffman decoding method described above can be further optimized.
[047] For convenience of description, only the calculation results for the 27 DRA Huffman codebooks are given herein (see table 3); the specific codebook data can be found in the corresponding parts of the DRA standard appendix. As will be appreciated by those skilled in the art, optimization of DRA Huffman decoding can be achieved by reading the description of the present embodiment in conjunction with the actual 27 DRA Huffman codebooks.
[048] In particular, fig. 4 shows an improved DRA Huffman decoding method 3000 according to a second embodiment of the invention. The method is substantially the same as the general improved Huffman decoding method 1000 described above in connection with fig. 2 (steps 3101-3110 of fig. 4 corresponding to steps 1101-1110 of fig. 2), except that an initial bit reading step 3101A is added, in which the value of x is read from the encoded code stream and guides the operation of step 3101. The x-value calculation step 3101', related to step 3101A, will be explained in detail later with reference to fig. 6.
[049] In addition, an improved DRA Huffman decoding apparatus 4000 is shown in fig. 5. The decoding apparatus is substantially the same as the general improved Huffman decoding apparatus 2000 described above in connection with fig. 3 (modules 4101-4110 of fig. 5 corresponding to modules 2101-2110 of fig. 3), except that an initial bit reading module 4101A is added. In the initial bit reading module 4101A, the value of x is read from the encoded code stream, and the operation of module 4101 is guided by this value. The x-value calculation module 4101', associated with module 4101A, will be explained in detail later in conjunction with fig. 7.
[050] In order to further optimize Huffman decoding for the DRA codebook characteristics, a trade-off needs to be made, according to those characteristics, between the number x of bits searched at the first level and the maximum search depth (max_depth) at the second level: in the case where the total number of entries of the two-level codebook corresponding to x does not increase too much (e.g., not more than a certain multiple of the total number of entries of the two-level decoding codebook when x = 4), an inflection point of the second-level maximum search depth is found. In this way, the Huffman decoding speed is increased as much as possible without increasing the total number of codebook entries too much.
[051] Step 3101 'and module 4101' are described in detail below in conjunction with fig. 6 and 7, respectively. Turning first to fig. 6, step 3101', at the encoding end, for calculating and storing x in a DRA encoded code stream, will first be described in detail. As shown in fig. 6, in step 3b, the value of the loop variable x for a certain huffman codebook (e.g., the y-th codebook corresponds to data of the codebook number # Hufftaby row in table 3) is initialized to 4; then in step 3c it is determined whether the size value at which the first stage reads in m bits (size (m), e.g. the corresponding size value in row tab _ mx. dat in table 3) is greater than T1 times the size value at which the first stage reads in 4 bits (i.e. size (4), e.g. the corresponding size value in row tab _4x. dat in table 3). Among them, T1 is the first threshold, and is preferably 1.5 to 2, more preferably 1.8 to 2, and still more preferably 1.9 to 2. If the judgment in the step 3c is yes, directly entering the step 3d, and setting x to be 4; the value of x is then stored in the DRA code stream in step 3h, and step 3101' ends.
[052] If the determination in step 3c is no, the process proceeds to step 3e for a further determination, namely whether m equals 8. If the determination in step 3e is yes, the process goes directly to step 3d, which sets x to 4; the value of x is then stored in the DRA code stream in step 3h, and step 3101' ends. If the determination in step 3e is no, the third determination step 3f is entered, namely whether (MD(m) - MD(m+1))/MD(m) is greater than T2, where MD(m) denotes the maximum search depth when m bits are read at the first level (the corresponding max_depth value in the row labeled tab_mx.dat in table 3). T2 is a second threshold, preferably 0.2 to 0.7, more preferably 0.2 to 0.5, and still more preferably 0.25 to 0.3.
[053] If the determination in step 3f is yes, the process enters step 3g, which sets x to m + 1; the value of x is then stored in the DRA code stream in step 3h, and step 3101' ends. If the determination in step 3f is no, m is increased by 1 and the process returns to step 3c for the next round of determination.
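The selection loop of steps 3b-3h can be sketched as a C function. In a real encoder, size(m) and MD(m) would come from table 3 for the codebook at hand; since table 3 is not reproduced here, the arrays in the test below are hypothetical stand-ins, and the thresholds T1 and T2 are passed in from the preferred ranges given above.

```c
#include <assert.h>

/* x-selection of step 3101' (fig. 6): size[m] is the total two-level
   codebook size when the first level reads m bits, md[m] the corresponding
   maximum search depth; both arrays are indexed by m and must be valid
   for m = 4..8. Returns the value of x to store in the code stream. */
int choose_x(const double size[], const double md[], double T1, double T2) {
    for (int m = 4; ; m++) {                     /* step 3b: m starts at 4 */
        if (size[m] > T1 * size[4])              /* step 3c: size blow-up */
            return 4;                            /* step 3d */
        if (m == 8)                              /* step 3e: no inflection found */
            return 4;
        if ((md[m] - md[m + 1]) / md[m] > T2)    /* step 3f: depth inflection */
            return m + 1;                        /* step 3g */
    }
}
```

For example, with a codebook whose max_depth drops from 12 to 8 between m = 4 and m = 5 (a relative drop of 1/3 > T2 = 0.25) while the size stays under T1 = 1.9 times size(4), the function returns x = 5.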
[054] Turning next to fig. 7, module 4101', which at the encoding end calculates x and stores it in the DRA encoded code stream, is described in detail. As shown in fig. 7, the assignment module 4b first initializes the loop variable m for a given Huffman codebook (e.g., the y-th codebook corresponds to the data of the row numbered #Hufftab y in table 3) to 4; then the first determination module 4c determines whether size(m), the size value when the first level reads m bits, is greater than T1 times size(4), the size value when the first level reads 4 bits. Here T1 is the first threshold, preferably 1.5 to 2, more preferably 1.8 to 2, and still more preferably 1.9 to 2. If the determination in module 4c is yes, its first path leads to a first x-value determination module 4d, which sets x to 4; the value of x is then stored in the DRA code stream by the storage module 4h, and the processing of module 4101' ends.
[055] If the determination in module 4c is no, its second path leads to a second determination module 4e, which determines whether m equals 8. If the determination in module 4e is yes, its first path leads to the first x-value determination module 4d, which sets x to 4; the value of x is then stored in the DRA code stream by the storage module 4h, and the processing of module 4101' ends. If the determination in module 4e is no, its second path leads to a third determination module 4f, which determines whether (MD(m) - MD(m+1))/MD(m) is greater than T2, where MD(m) denotes the maximum search depth when m bits are read at the first level (the corresponding max_depth value in the row labeled tab_mx.dat in table 3). T2 is the second threshold, preferably 0.2 to 0.7, more preferably 0.2 to 0.5, and still more preferably 0.25 to 0.3.
[056] If the determination in module 4f is yes, the process goes directly to a second x-value determination module 4g, which sets x to m + 1; the value of x is then stored in the DRA code stream by the storage module 4h, and the processing of module 4101' ends. Otherwise, m is increased by 1 and the process returns to the first determination module 4c for the next round of determination.
Table 3 analysis of the 27 Huffman codebooks
Third embodiment: improved DRA Huffman decoding technique
[057] The following embodiment likewise addresses features of the DRA technology and details yet another improved DRA Huffman decoding technique. The 27 codebooks targeted in this embodiment are identical to the 27 DRA Huffman codebooks described in the second embodiment in connection with figs. 1A-1B and the DRA standard.
[058] In the two-level Huffman decoding process, the number x of first-level search bits is positively correlated with the total number of entries of the two-level codebook, and negatively correlated with the maximum depth (max_depth) of second-level decoding. Optimizing the two-level codebook requires a trade-off between the total number of entries and the maximum depth, which yields a preferred value of x. A composite cost function may therefore be defined, which is used in the improved Huffman decoding method 5000 (not shown) and the improved Huffman decoding apparatus 6000 (not shown), respectively.
[059] The detailed definition of the above-mentioned composite cost function is:
cost(x)=α*max_depth(x)+log2(size(x))*log2(size(0))
where α is a weight parameter; x denotes the number of bits initially read at the first level of the decoding end; max_depth(x) and size(x) denote the max_depth and size values in the row labeled tab_xx.dat in table 3; and size(0) denotes the number of entries of the linear codebook.
[060] In the improved Huffman decoding method 5000, α is first given a value within a certain range, preferably 0.4-1.6, more preferably 0.6-1.4, and still more preferably 0.8-1.2. Then the values of cost_y(x) are calculated for the different values of x for the y-th codebook (i.e., one of the 27 codebooks of the DRA algorithm), where y is the codebook number, ranging from 1 to 27. Next, the value x_yMin of x at which cost_y(x) is minimal is recorded. Finally, x_yMin is output for the y-th DRA codebook and subsequently directs the decoding of that codebook (i.e., x_yMin serves as the number of bits initially read at the first level).
[061] The improved Huffman decoding apparatus 6000 comprises: an assignment module for assigning a value to α, preferably 0.4-1.6, more preferably 0.6-1.4, and still more preferably 0.8-1.2; a calculation module for calculating the values of cost_y(x) of the y-th DRA codebook (i.e., one of the 27 codebooks of the DRA algorithm) for the different values of x, where y is the codebook number, ranging from 1 to 27; a recording module for recording the value x_yMin of x at which cost_y(x) is minimal; and an output module for outputting x_yMin for the y-th DRA codebook, by which the decoding of that codebook is subsequently directed (i.e., x_yMin serves as the number of bits initially read at the first level).
[062] In the improved Huffman decoding method 5000 and/or the improved Huffman decoding apparatus 6000, a larger value of α means that the codebook selected for x has a smaller max_depth and a larger size, i.e., a higher decoding speed but a larger codebook storage space; conversely, a smaller value of α means that the codebook selected for x has a larger max_depth and a smaller size, i.e., a lower decoding speed but a smaller codebook storage space.
[063] According to one example of the present invention, when α = 1.0 and 4 ≤ m ≤ 8, the values of cost_y(x) and x_yMin corresponding to the 27 codebooks of table 3 are as shown in table 4.
Table 4 selection of the first-level search bit number
Codebook (y) | size(0) | cost_y(4) | cost_y(5) | cost_y(6) | cost_y(7) | cost_y(8) | x_yMin
1  | 7   | 11.23  | 14.04  | 16.84  | 19.65  | 22.46 | 4
2  | 64  | 62.02  | 62.35  | 53.92  | 56.66  | 60.23 | 6
3  | 32  | 38.24  | 39.5   | 41.52  | 40.65  | 44.25 | 4
4  | 18  | 28.26  | 29.34  | 31.56  | 34.42  | 37.45 | 4
5  | 18  | 31.26  | 32.49  | 34.81  | 37.55  | 40.52 | 4
6  | 116 | 75.69  | 74.61  | 72.28  | 71.32  | 72.67 | 7
7  | 116 | 73.85  | 73.38  | 74.03  | 75.04  | 77.41 | 5
8  | 16  | 25.58  | 26.99  | 29.43  | 32.18  | 35.07 | 4
9  | 16  | 31.23  | 32.71  | 34.84  | 37.39  | 40.18 | 4
10 | 81  | 60.16  | 53.48  | 53.93  | 55.29  | 55.57 | 5
11 | 25  | 28.82  | 29.35  | 30.27  | 32.51  | 37.15 | 4
12 | 81  | 62.36  | 54.12  | 51.86  | 53.08  | 55.7  | 6
13 | 289 | 102.43 | 92.85  | 86.47  | 82.81  | 82.18 | 8
14 | 31  | 32.37  | 32.96  | 34.76  | 36.9   | 39.63 | 4
15 | 63  | 47.12  | 47.21  | 45.46  | 47.55  | 51.4  | 6
16 | 127 | 71.68  | 64.38  | 60.02  | 59.47  | 62.13 | 7
17 | 255 | 102.44 | 93.91  | 83.94  | 77.76  | 78.74 | 7
18 | 256 | 106.61 | 87.99  | 80.83  | 76.64  | 76.69 | 7
19 | 81  | 60.16  | 54.48  | 52.8   | 54.13  | 56.98 | 6
20 | 25  | 30.01  | 30.5   | 32.27  | 34.71  | 37.15 | 4
21 | 81  | 59.36  | 55.39  | 55.31  | 57.24  | 57.86 | 6
22 | 289 | 157.35 | 147.81 | 117.64 | 106.18 | 97.61 | 8
23 | 31  | 34.54  | 33.51  | 35.05  | 37.95  | 41.69 | 5
24 | 63  | 48.12  | 46.21  | 45.53  | 47.49  | 51.31 | 6
25 | 127 | 76.75  | 72.38  | 66.78  | 62.38  | 63.03 | 7
26 | 255 | 128.53 | 97.11  | 82.05  | 75.4   | 73.1  | 8
27 | 256 | 96.7   | 81.24  | 74.28  | 71.54  | 71.65 | 7
[064] It will be appreciated by those skilled in the art that the second and third embodiments described above are not limited to DRA Huffman codebooks. After reading the description and claims of the present invention, those skilled in the art can use the optimization methods and apparatus for DRA Huffman codebooks of the second and third embodiments to optimize the first-level bit number of a general two-level Huffman codebook.
Simulation results
[065] The inventors of the present application compared the performance of the three improved Huffman decoding methods provided herein with that of the existing linear-search Huffman decoding method on a fixed-point 16-bit digital signal processor, the Blackfin-533. Both Huffman decoding programs are written in standard C, contain no platform-specific optimization, and run embedded in an actual DRA audio decoder. Although the verification platform is the Blackfin-533, the programs themselves can also run on a general-purpose PC and on most embedded processors and digital signal processors. The data given in tables 4-1 to 4-3 are the processor clock cycles consumed by the Huffman decoding part in decoding 1 second of DRA code stream, where the sampling rate of the DRA code stream is 48 kHz and the bit rate is 128 kbps.
Table 4-1 comparative test for the first embodiment of the present invention
[066] For the first embodiment, the number of codebook entries after modification is 3112, against 2819 original entries; i.e., the number of entries increases by about 10%.
Table 4-2 comparative test for the second embodiment of the present invention
[067] For the second embodiment, the number of codebook entries after modification is 4266, against 2819 original entries; i.e., the number of entries increases by about 51%.
Table 4-3 comparative test for the third embodiment of the present invention
[068] For the third embodiment, the number of codebook entries after modification is 4324, against 2819 original entries; i.e., the number of entries increases by about 53%.
[069] The following conclusions can be drawn from tables 4-1 to 4-3. Without compiler optimization, the optimized Huffman decoding module consumes only 1/6 to 1/7 of the processor clock cycles per second consumed by the linear search, i.e., a speed-up of 6 to 7 times. With compiler optimization, it consumes only 1/4 to 1/5 of the clock cycles of the linear search, i.e., a speed-up of 4 to 5 times. The present application therefore has a remarkable optimization effect on the Huffman decoding module.
[070] While the invention has been described in connection with what are presently considered to be the most practical and preferred embodiments, it is to be understood that the invention is not limited to the disclosed embodiments, but on the contrary is intended to cover various modifications and equivalent arrangements included within the spirit and scope of the appended claims. As those skilled in the art will appreciate, many variations and/or modifications may be made to the invention as shown in the specific embodiments without departing from its spirit or scope as broadly described. The present embodiments are therefore to be considered in all respects illustrative and not restrictive.