
Printed Pages: 02    Sub Code: KCS-064

Paper Id: 236661 Roll No.

B.TECH
(SEM VI) THEORY EXAMINATION 2022-23
DATA COMPRESSION
Time: 3 Hours Total Marks: 100
Note: Attempt all Sections. If any required data is missing, choose it suitably.

SECTION A

1. Attempt all questions in brief. 2 x 10 = 20


(a) What are the different measures of performance of a data compression algorithm?
(b) Given an alphabet A = {a1, a2, a3, a4}, find the first-order entropy in the following case:
P(a1) = P(a2) = P(a3) = P(a4) = ¼
(c) What are the applications of Huffman Coding?
(d) Calculate the Golomb code for n = 9 and n = 13 with parameter m = 5.
(e) List the applications of dictionary-based data compression techniques.
(f) Compare Binary code with Huffman code.
(g) What are the various distortion criteria?
(h) Explain the mismatch effect.
(i) Define Code Vectors.
(j) What is the concept of pruning?
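
For 1(b), since all four symbols are equiprobable, the first-order entropy is H = 4 × (1/4) × log2 4 = 2 bits/symbol. For 1(d), the sketch below (Python) computes Golomb codes; it assumes the common convention of writing the quotient in unary as q ones followed by a zero, with the remainder in truncated binary, which may differ from the convention used in the textbook.

import math

def golomb(n, m):
    # Quotient in unary (q ones followed by a zero -- an assumed convention),
    # remainder in truncated binary.
    q, r = divmod(n, m)
    b = math.ceil(math.log2(m))
    cutoff = 2 ** b - m              # remainders below this need only b-1 bits
    if r < cutoff:
        tail = format(r, '0%db' % (b - 1))
    else:
        tail = format(r + cutoff, '0%db' % b)
    return '1' * q + '0' + tail

for n in (9, 13):
    print(n, golomb(n, 5))           # m = 5, as in question 1(d)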

SECTION B

2. Attempt any three of the following: 10x3=30

(a) How do uniquely decodable codes differ from prefix codes? Explain with an example.
Describe the procedure for testing whether a code is uniquely decodable. Are the codes
{0, 10, 110, 111} and {1, 10, 110, 111} uniquely decodable?
(b) Explain the difference between a minimum variance Huffman code and a Huffman code, with
an example. For the alphabet A = {a1, a2, a3, a4, a5} with probabilities P(a1) = 0.15, P(a2)
= 0.04, P(a3) = 0.26, P(a4) = 0.05 and P(a5) = 0.50:
(i) Calculate the entropy of this source.
(ii) Find the Huffman code for this source.
(iii) Find the average length of the code in (ii) and its redundancy.
(c) Explain the basic algorithm behind prediction with partial match (PPM). What is facsimile
encoding? Explain the run-length coding technique used earlier for facsimile.

(d) What do you understand by adaptive quantization? Explain the various approaches to
adapting the quantizer parameters.

(e) What do you understand by Vector Quantization? Discuss the advantages of vector
quantization over scalar quantization.
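
For 2(b), a minimal Python sketch that builds a Huffman code for the given probabilities and reports the entropy, average codeword length and redundancy; tie-breaking, and therefore the exact codewords, may differ from a hand-drawn tree, but the average length will not.

import heapq, math

probs = {'a1': 0.15, 'a2': 0.04, 'a3': 0.26, 'a4': 0.05, 'a5': 0.50}

# Build the Huffman tree by repeatedly merging the two least probable nodes.
heap = [(p, i, {s: ''}) for i, (s, p) in enumerate(probs.items())]
heapq.heapify(heap)
count = len(heap)
while len(heap) > 1:
    p1, _, c1 = heapq.heappop(heap)
    p2, _, c2 = heapq.heappop(heap)
    merged = {s: '0' + w for s, w in c1.items()}
    merged.update({s: '1' + w for s, w in c2.items()})
    heapq.heappush(heap, (p1 + p2, count, merged))
    count += 1

code = heap[0][2]
entropy = -sum(p * math.log2(p) for p in probs.values())
avg_len = sum(probs[s] * len(w) for s, w in code.items())
print(code)
print('entropy    =', round(entropy, 4), 'bits/symbol')
print('avg length =', round(avg_len, 4), 'bits/symbol')
print('redundancy =', round(avg_len - entropy, 4), 'bits/symbol')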

SECTION C
3. Attempt any one part of the following: 10x1=10
(a) Define the two-state Markov model for binary images. Also define the ignorance model and
explain how it is related to probability models.
(b) Discuss the importance of modeling and coding in the compression process with an example.
How does information theory relate to lossless compression? Explain some key concepts in
information theory and their relevance to compression.
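
For 3(a), a small sketch of the two-state (white/black) Markov model often used for binary images: given the two transition probabilities, it computes the stationary state probabilities and the entropy of the source. The transition probabilities below are illustrative placeholders, not values taken from the question.

import math

def h(p):
    # Binary entropy in bits; h(0) = h(1) = 0.
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

# Placeholder transition probabilities (assumed, not given in the question):
P_b_given_w = 0.01   # P(black | white)
P_w_given_b = 0.30   # P(white | black)

# Stationary probabilities of the white and black states.
P_w = P_w_given_b / (P_b_given_w + P_w_given_b)
P_b = 1 - P_w

# Entropy of the Markov source = probability-weighted entropy of each state's transitions.
H = P_w * h(P_b_given_w) + P_b * h(P_w_given_b)
print('P(Sw) =', round(P_w, 4), ' P(Sb) =', round(P_b, 4), ' H =', round(H, 4), 'bits')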



4. Attempt any one part of the following: 10x1=10
(a) Given an alphabet {a1, a2, ..., a26} of size 26, the letter ak is encoded as per the following
rules:
(i) ak is encoded as the 5-bit binary representation of k - 1, if 1 ≤ k ≤ 20;
(ii) ak is encoded as the 4-bit binary representation of k - 11, for all k > 20.
Decode the following string using the adaptive Huffman procedure. Show the updated Huffman
tree at each step.
000001010001000001100010110.
(b) How can the Rice code be viewed? Explain the implementation of the Rice code in the
recommendation for lossless compression from the Consultative Committee on Space Data
Standards (CCSDS).
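
For 4(a), a minimal sketch of the fixed code stated in the question (the 5-bit binary value of k - 1 for 1 ≤ k ≤ 20, the 4-bit binary value of k - 11 for k > 20), which serves as the initial code in the adaptive Huffman procedure; the tree-update steps of the decoder are not shown here.

def encode_letter(k):
    # Fixed initial code from the question, for a_k with k = 1..26.
    if 1 <= k <= 20:
        return format(k - 1, '05b')   # 5-bit binary of k - 1
    return format(k - 11, '04b')      # 4-bit binary of k - 11 for k > 20

# Example codewords for a few letters of the alphabet.
for k in (1, 2, 20, 21, 26):
    print('a%d -> %s' % (k, encode_letter(k)))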

5. Attempt any one part of the following: 10x1=10


(a) A sequence is encoded using the LZW algorithm with the initial dictionary shown in the table below.

Index Entry
1 a
2 ¢
3 r
4 t
The output of the LZW encoder is the following sequence:
3 1 4 6 8 4 2 1 2 5 10 6 11 13 6
Decode this sequence. Discuss the relative advantages of the LZ77, LZ78 and LZW compression
schemes.
(b) Explain the Burrows-Wheeler Transform (BWT) in detail. Given the sequence this¢is¢the,
encode it using the BWT and move-to-front coding.
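
For 5(a), a minimal Python sketch of LZW decoding with the given initial dictionary and encoder output; it handles the usual special case in which a received index is exactly the entry about to be added to the dictionary. The BWT and move-to-front part of (b) is not sketched here.

def lzw_decode(codes, initial):
    # Decode an LZW code sequence given the initial dictionary (index -> string).
    d = dict(initial)
    nxt = max(d) + 1
    prev = d[codes[0]]
    out = [prev]
    for c in codes[1:]:
        if c in d:
            entry = d[c]
        elif c == nxt:                # special case: code not yet in the dictionary
            entry = prev + prev[0]
        else:
            raise ValueError('bad code %d' % c)
        out.append(entry)
        d[nxt] = prev + entry[0]      # new entry = previous string + first char of current
        nxt += 1
        prev = entry
    return ''.join(out)

initial = {1: 'a', 2: '¢', 3: 'r', 4: 't'}
codes = [3, 1, 4, 6, 8, 4, 2, 1, 2, 5, 10, 6, 11, 13, 6]
print(lzw_decode(codes, initial))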

6. Attempt any one part of the following: 10x1=10


(a) What do you understand by a uniform quantizer? How is uniform quantization of uniformly
distributed sources and of non-uniform sources done?
(b) Consider the following lossy compression scheme for binary sequences. We divide the
binary sequence into blocks of size M. For each block we count the number of 0s. If this
number is greater than or equal to M/2, we send a 0; otherwise, we send a 1.


(i) If the sequence is random with P(0) = 0.8, compute the rate and distortion for M = 1, 2,
4, 8, 16. Compare your results with the rate distortion function for binary sources.
(ii) Repeat part (i), assuming that the output of the encoder is encoded at a rate equal to
the entropy of the output.
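
For 6(b)(i), a sketch of the exact computation, assuming the decoder reconstructs an all-zero block when a 0 is received and an all-one block when a 1 is received, with distortion measured as the expected fraction of reconstructed bits in error. The last column prints the rate-distortion function of a binary source with Hamming distortion, R(D) = Hb(p) - Hb(D) (valid for D up to min(p, 1 - p)), for comparison.

import math

P0 = 0.8                                # P(0) from the question

def hb(p):
    # Binary entropy function in bits.
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def rate_and_distortion(M):
    # One bit is sent per block of M source bits.
    rate = 1.0 / M
    distortion = 0.0
    for k in range(M + 1):              # k = number of 0s in the block
        p_k = math.comb(M, k) * P0 ** k * (1 - P0) ** (M - k)
        errors = (M - k) if k >= M / 2 else k   # send 0: the 1s are wrong; send 1: the 0s are wrong
        distortion += p_k * errors / M
    return rate, distortion

for M in (1, 2, 4, 8, 16):
    R, D = rate_and_distortion(M)
    RD = max(hb(1 - P0) - hb(D), 0.0)   # R(D) = Hb(p) - Hb(D), with p = P(1) = 0.2
    print('M=%2d  rate=%.4f  distortion=%.4f  R(D)=%.4f' % (M, R, D, RD))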

7. Attempt any one part of the following: 10x1=10



(a) Explain the various functions involved in the Linde-Buzo-Gray (LBG) algorithm. Also
explain the initialization of the LBG algorithm.


(b) Write short notes on:
(i) Tree-Structured Vector Quantizers
(ii) Lattice Vector Quantizers
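
For 7(a), a minimal sketch of the LBG (generalized Lloyd) iteration on synthetic two-dimensional training data. The codebook here is initialized by randomly selecting training vectors, which is only one of the possible initializations (the splitting technique is another); the stopping rule is a fixed iteration count rather than a distortion threshold.

import random

def lbg(training, codebook_size, iters=20):
    # LBG / generalized Lloyd: alternate nearest-neighbour partition and centroid update.
    codebook = random.sample(training, codebook_size)
    for _ in range(iters):
        # 1. Quantize: assign each training vector to its nearest code vector.
        cells = [[] for _ in codebook]
        for v in training:
            j = min(range(len(codebook)),
                    key=lambda i: sum((a - b) ** 2 for a, b in zip(v, codebook[i])))
            cells[j].append(v)
        # 2. Update: replace each code vector by the centroid of its cell.
        for j, cell in enumerate(cells):
            if cell:
                codebook[j] = tuple(sum(x) / len(cell) for x in zip(*cell))
    return codebook

random.seed(0)
training = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(500)]
print(lbg(training, codebook_size=4))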

