C2. Fixed Point and Floating Point Operations
Chapter 1 : (1-1) to (1-58)
Chapter 2 : (2-1) to (2-88)
Chapter 3 : (3-1) to (3-60)
Chapter 4 : Pipelining (4-1) to (4-26)
Chapter 5 : (5-1) to (5-114)
Chapter 6 : (6-1) to (6-140)
Appendix A : Proofs (A-1) to (A-2)
Features of Book
* Use of clear, plain and lucid language, making the understanding very easy.
* Illustrations throughout.
* Neat and to-the-scale diagrams for easier understanding of concepts.
Syllabus

1. (Chapter - 1, 2)
Functional units - Basic operational concepts - Bus structures - Performance and metrics - Instructions and instruction sequencing - Hardware software interface - Instruction set architecture - Addressing modes - RISC - CISC. ALU design - Fixed point and floating point operations.

2. (Chapter - 3)

3. Pipelining (Chapter - 4)
Basic concepts - Data hazards - Instruction hazards - Influence on instruction sets - Data path and control considerations - Performance considerations - Exception handling.
4. Memory System (Chapter - 5)
Basic concepts - Semiconductor RAM - ROM - Speed, size and cost - Cache memories - Improving cache performance - Virtual memory - Memory management requirements - Associative memories - Secondary storage devices.

5. I/O Organization (Chapter - 6)
1.1 Introduction
1.2 Computer Types
1.3 Functional Units
1.7 Performance
1.7.1 Processor Clock
1.7.2 CPU Time
1.7.3 Performance Metrics
1.7.3.1 Hardware Software Interface
1.7.3.2 Other Performance Measures
1.7.4 Performance Measurement
1.8 Instructions and Instruction Sequencing
1.8.1 Register Transfer Notation
1.8.2 Assembly Language Notation
1.8.3 Basic Instruction Types
1.8.3.1 Three Address Instructions
1.8.3.2 Two Address Instructions
1.8.3.3 One Address Instructions
1.8.3.4 Zero Address Instructions
1.8.4 Instruction Execution and Straight-Line Sequencing
1.8.5 Branching
1.8.6 Condition Codes
1.8.7 Generating Memory Addresses
2.1 Introduction
2.2.1.1 Half-Adder
2.2.1.2 Full-Adder
2.8.4.3 Flowchart and Algorithm for Floating Point Addition and Subtraction
Review Questions

(3-1) to (3-60)
3.1 Introduction
2.1 Introduction
This chapter explains the addition and subtraction of signed numbers and the implementation of their circuits. We will also see the design of fast adders. Multiplication algorithms for unsigned and signed numbers are covered in this chapter. Then a technique which works equally well for both positive and negative multipliers, called the Booth algorithm, is explained. Next, the algorithms for integer division are explained. The chapter discusses floating point numbers and basic arithmetic operations on them at the end.
Based on the number system two basic data types are implemented in the computer
system : fixed point numbers and floating point numbers. Representing numbers in such
data types is commonly known as fixed point representation and floating point
representation, respectively.
In the binary number system, a number can be represented as an integer or a fraction. Depending on the design, the hardware can interpret a number as an integer or a fraction. The radix point is never explicitly specified. It is implicit in the design and the hardware interprets it accordingly. In integer numbers, the radix point is fixed and assumed to be to the right of the rightmost digit. As the radix point is fixed, the number system is referred to as a fixed point number system.
With fixed point number system we can represent positive or negative integer
numbers. However, floating point number system allows the representation of numbers
having both integer part and fractional part.
Example : Find the 1's complement of (10111001)2.
Solution :
10111001    number
01000110    1's complement

The 2's complement is the binary number that results when we add 1 to the 1's complement. It is given as

2's complement = 1's complement + 1

Example : Find the 2's complement of (10100011)2.
Solution :
1010 0011   number
0101 1100   1's complement
       + 1
0101 1101   2's complement

Let us see the subtraction of binary numbers using 1's complement and 2's complement number representations.
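As a quick check of these rules, the two complement operations can be sketched in a few lines of code. This is only an illustrative sketch; the function names are chosen here and do not come from the text.

```python
def ones_complement(bits: str) -> str:
    """1's complement: flip every bit of a binary string."""
    return "".join("1" if b == "0" else "0" for b in bits)

def twos_complement(bits: str) -> str:
    """2's complement = 1's complement + 1, kept to the same bit width."""
    width = len(bits)
    value = int(ones_complement(bits), 2) + 1
    return format(value % (1 << width), f"0{width}b")

print(ones_complement("10111001"))  # 01000110
print(twos_complement("10100011"))  # 01011101
```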
Addition of 28 and 15 :

    111          Carry
  0011100        (28)10
+ 0001111        (15)10
  0101011        (43)10

Note : Here, the magnitude of the greater number is 5-bit; however, the magnitude of the result is 6-bit. Therefore, the numbers are sign-extended to 7 bits.

Addition of 28 and -15 :

(01111)2 = (15)10, so its 1's complement (10000)2 represents -15. With sign extension :

  0011100        (28)10
+ 1110000        (-15)10
 10001100
        1        end-around carry
  0001101        (13)10

Addition of -28 and 15 :

We have (011100)2 = (28)10 and (01111)2 = (15)10, so the 1's complement (100011)2 represents -28. With sign extension :

  1100011        (-28)10
+ 0001111        (15)10
  1110010        (-13)10   Result is in 1's complement form

Verification : the 1's complement of 1110010 is 0001101 = (13)10, so the result is -13.

Addition of (-28) and (-15) :

  1100011        (-28)10
+ 1110000        (-15)10
 11010011
        1        end-around carry
  1010100        (-43)10   Result is in 1's complement form

Verification : the 1's complement of 1010100 is 0101011 = (43)10, so the result is -43.

Note : For a proper result, we suggest using one sign-bit extension for the number having the greater magnitude and representing the number having the smaller magnitude with the extended number of bits.
Let us now repeat the same additions using 2's complement representation.

Addition of 28 and 15 :

  0011100        (28)10
+ 0001111        (15)10
  0101011        (43)10

Addition of 28 and -15 :

(01111)2 = (15)10, so its 2's complement (10001)2 represents -15. With sign extension :

  0011100        (28)10
+ 1110001        (-15)10
 10001101        ignore carry
  0001101        (13)10

Addition of -28 and 15 :

(011100)2 = (28)10, so its 2's complement (100100)2 represents -28. With sign extension :

  1100100        (-28)10
+ 0001111        (15)10
  1110011        (-13)10   Result is in 2's complement form

Verification : the 2's complement of 1110011 is 0001101 = (13)10, so the result is -13.

Addition of -28 and -15 :

  1100100        (-28)10
+ 1110001        (-15)10
 11010101        ignore carry
  1010101        (-43)10   Result is in 2's complement form

Verification : the 2's complement of 1010101 is 0101011 = (43)10, so the result is -43.
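The four additions above can be verified with a short sketch that mimics what an n-bit 2's complement adder does. This is an illustration only; the fixed 7-bit width matches the sign-extended worked examples, and the helper names are invented here.

```python
WIDTH = 7  # sign-extended width used in the worked examples above

def to_twos(value: int, width: int = WIDTH) -> str:
    """Encode a signed integer as a 2's complement bit string."""
    return format(value % (1 << width), f"0{width}b")

def add_twos(a: str, b: str) -> str:
    """Add two 2's complement strings; any carry out of the MSB is ignored."""
    width = len(a)
    return format((int(a, 2) + int(b, 2)) % (1 << width), f"0{width}b")

def from_twos(bits: str) -> int:
    """Decode a 2's complement bit string back to a signed integer."""
    raw = int(bits, 2)
    return raw - (1 << len(bits)) if bits[0] == "1" else raw

for x, y in [(28, 15), (28, -15), (-28, 15), (-28, -15)]:
    total = add_twos(to_twos(x), to_twos(y))
    print(f"{x:+d} {y:+d} -> {total} = {from_twos(total)}")
```

Note that the same adder handles all four sign combinations, which is exactly why 2's complement is the representation used in hardware.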
2.2.1 Adders
Digital computers perform various arithmetic operations. The most basic operation,
no doubt, is the addition of two binary digits. This simple addition consists of four
possible elementary operations, namely,
0 + 0 = 0
0 + 1 = 1
1 + 0 = 1
1 + 1 = 10
The first three operations produce a sum whose length is one digit, but when the last operation is performed the sum is two digits. The higher significant bit of this result is called a carry, and the lower significant bit is called the sum. The logic circuit which performs this operation is called a half-adder. The circuit which performs the addition of three bits (two significant bits and a previous carry) is a full-adder. Let us see the logic circuits to perform half-adder and full-adder operations.
2.2.1.1 Half-Adder
The half-adder operation needs two binary inputs : augend and addend bits; and
two binary outputs : sum and carry. The truth table shown in Table 2.1 gives the
relation between input and output variables for half-adder operation.
Inputs       Outputs
A    B       Carry    Sum
0    0       0        0
0    1       0        1
1    0       0        1
1    1       1        0

Table 2.1 Truth table for half-adder

From the truth table :

For Carry : Carry = AB
For Sum :   Sum = A'B + AB' = A ⊕ B
Logic diagram
Limitations of Half-Adder :
In multidigit addition we have to add two bits along with the carry of the previous digit addition. Effectively, such addition requires the addition of three bits. This is not possible with a half-adder. Hence half-adders are not used in practice.
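Behaviourally, a half-adder is just an AND gate and an XOR gate. The following sketch (illustrative only; the function name is mine) reproduces Table 2.1:

```python
def half_adder(a: int, b: int) -> tuple[int, int]:
    """Return (carry, sum) for two input bits: carry = A AND B, sum = A XOR B."""
    return a & b, a ^ b

# Reproduce Table 2.1 row by row
for a in (0, 1):
    for b in (0, 1):
        carry, s = half_adder(a, b)
        print(f"A={a} B={b} -> Carry={carry} Sum={s}")
```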
2.2.1.2 Full-Adder
A full-adder is a combinational circuit that forms the arithmetic sum of three input bits. It consists of three inputs and two outputs. Two of the input variables, denoted by A and B, represent the two significant bits to be added. The third input represents the carry from the previous lower significant position. The truth table for the full-adder is shown in Table 2.2.
Inputs              Outputs
A    B    Cin       Carry    Sum
0    0    0         0        0
0    0    1         0        1
0    1    0         0        1
0    1    1         1        0
1    0    0         0        1
1    0    1         1        0
1    1    0         1        0
1    1    1         1        1

Table 2.2 Truth table for full-adder
Logic diagram

From the truth table, the sum output is

Sum = A'B'Cin + A'B Cin' + AB'Cin' + AB Cin
    = Cin (A'B' + AB) + Cin' (A'B + AB')
    = Cin (A ⊕ B)' + Cin' (A ⊕ B)
    = A ⊕ B ⊕ Cin

With this simplified Boolean function, the circuit for the full-adder can be implemented as shown in Fig. 2.7.

The carry output is

Cout = A'B Cin + AB'Cin + AB Cin' + AB Cin
     = AB + Cin (A'B + AB')
     = AB + Cin (A ⊕ B)
     = AB + A Cin + B Cin

The form Cout = AB + Cin (A ⊕ B) shows that a full-adder can be realized with two half-adders : the first half-adder produces A ⊕ B and AB, the second half-adder combines A ⊕ B with Cin, and an OR gate merges the two carry terms.
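The two-half-adder decomposition above can be checked exhaustively against ordinary integer addition. This is a sketch for illustration; the variable and function names are chosen here.

```python
def full_adder(a: int, b: int, cin: int) -> tuple[int, int]:
    """Two half-adders plus an OR gate:
    Sum = A xor B xor Cin, Cout = AB + Cin(A xor B)."""
    p = a ^ b                     # first half-adder: A xor B
    s = p ^ cin                   # second half-adder: final sum bit
    cout = (a & b) | (p & cin)    # OR of the two half-adder carries
    return cout, s

# Check every row of Table 2.2 against plain integer addition
for a in (0, 1):
    for b in (0, 1):
        for cin in (0, 1):
            cout, s = full_adder(a, b, cin)
            assert 2 * cout + s == a + b + cin
print("full adder matches Table 2.2")
```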
A D flip-flop is used to store the carry output generated after addition. This carry is used as the carry input for the next addition. Initially, the D flip-flop is cleared and addition starts with the least significant bits of both registers. After each clock pulse, the data within the right-shift registers are shifted right by 1 bit, and we get the bits of the next digit and the carry of the previous addition as new inputs for the full-adder. The sum bit of the full-adder is connected as a serial input of shift register A. Thus the result of serial addition gets stored in register A. A new number can be added to the contents of register A by loading it into register B and repeating the process of serial addition.
It should be noted that either a half-adder can be used for the least significant position, or the carry input of a full-adder is made 0, because there is no carry into the least significant bit position.
When subtraction is selected, the bits of the second operand are complemented and the carry-in (C0) is set to one. Therefore, the n-bit adder gives the result R = A + B' + 1, where B' + 1 represents the 2's complement of the number B. Fig. 2.12 (b) shows the combined adder/subtractor circuit with an Add/Subtract control input.
Addition of the LSB position produces a carry into the second position. This carry, when added to the bits of the second position (stage), produces a carry into the third position. The latter carry, when added to the bits of the third position, produces a carry into the last position. The key thing to notice in this example is that the sum bit generated in the last position (MSB) depends on the carry that was generated by the addition in the previous positions. This means that the adder will not produce a correct result until the LSB carry has propagated through the intermediate full-adders. This represents a time delay that depends on the propagation delay produced in each full-adder. For example, if each full-adder is considered to have a propagation delay of 30 ns, then S3 will not reach its correct value until 90 ns after the LSB carry is generated. Therefore, the total time required to perform addition is 90 + 30 = 120 ns.

Obviously, this situation becomes much worse if we extend the adder circuit to add a greater number of bits. If the adder were handling 16-bit numbers, the carry propagation delay could be 480 ns.
Generally, carry propagation delay and sum propagation delay are measured in terms of gate delays. Looking at Fig. 2.8 we can notice that, for a full-adder, Cout requires two gate delays and the sum requires only one gate delay. When we connect such full-adder circuits in cascade to generate an n-bit ripple adder as shown in Fig. 2.10, Cn-1 is available in 2(n-1) gate delays, and Sn-1 is correct one XOR gate delay later. The final carry-out, Cn, is available after 2n gate delays. Thus for a 4-bit ripple adder, C4 is available after 8 (2 x 4) gate delays, C3 is available in 6 [2(4 - 1)] gate delays and S3 is available in 7 gate delays.
Fig. 2.13 Full adder circuit
Consider the circuit of the full adder shown in Fig. 2.13. Here, we define two functions : carry generate and carry propagate :

Pi = Ai ⊕ Bi
Gi = Ai Bi

The outputs of stage i are then

Si = Pi ⊕ Ci
Ci+1 = Gi + Pi Ci
Gi is called the carry generate function; it produces a carry when both Ai and Bi are one, regardless of the input carry. Pi is called the carry propagate function because it is the term associated with the propagation of the carry from Ci to Ci+1. Now Ci+1 can be expressed as a sum-of-products function of the P and G outputs of all the preceding stages. For example, the carries in a four-stage carry-lookahead adder are defined as follows :
C1 = G0 + P0 Cin
C2 = G1 + P1 C1 = G1 + P1 G0 + P1 P0 Cin
C3 = G2 + P2 C2 = G2 + P2 G1 + P2 P1 G0 + P2 P1 P0 Cin
C4 = G3 + P3 C3 = G3 + P3 G2 + P3 P2 G1 + P3 P2 P1 G0 + P3 P2 P1 P0 Cin
Fig. 2.14 shows the general form of a carry-lookahead adder circuit designed in
this way.
We can further simplify the design by noting that the sum equation of stage i (refer Appendix-A for details) can be written as

Si = Ai ⊕ Bi ⊕ Ci = Pi ⊕ Ci
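The lookahead equations can be sanity-checked with a small sketch. Note that the code below evaluates the recurrence Ci+1 = Gi + Pi·Ci sequentially for simplicity; in hardware the same equations are expanded into two-level logic so all carries appear in parallel. Names and the LSB-first list convention are assumptions made here for illustration.

```python
def cla_carries(a_bits: list[int], b_bits: list[int], c0: int) -> list[int]:
    """Compute Gi = Ai·Bi, Pi = Ai xor Bi and all carries from
    C(i+1) = Gi + Pi·Ci.  Returns [C0, C1, ..., Cn]."""
    g = [x & y for x, y in zip(a_bits, b_bits)]
    p = [x ^ y for x, y in zip(a_bits, b_bits)]
    carries = [c0]
    for gi, pi in zip(g, p):
        carries.append(gi | (pi & carries[-1]))
    return carries

# 4-bit example, bit 0 first: A = 0111 (7), B = 0101 (5)
a_bits, b_bits = [1, 1, 1, 0], [1, 0, 1, 0]
c = cla_carries(a_bits, b_bits, 0)
s = [a_bits[i] ^ b_bits[i] ^ c[i] for i in range(4)]
print(s, c[4])   # sum bits (LSB first) of 7 + 5 = 12, and the final carry-out
```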
With this scheme, all carries are available three gate delays after the signals A, B and Cin are applied, and all sum bits one XOR gate delay after that. In comparison, note that a 4-bit ripple-carry adder requires 7 gate delays for S3 and 8 gate delays for C4.
A complete 4-bit carry-lookahead adder in block form is shown in Fig. 2.15.
We can cascade such 4-bit carry-lookahead adders to form a 16-bit or 32-bit adder. We require four 4-bit carry-lookahead adders to form a 16-bit carry-lookahead adder
Fig. 2.16
and eight 4-bit carry-lookahead adders to form a 32-bit carry-lookahead adder. In the 32-bit adder, the carry-out C4 from the low-order 4-bit adder is available 3 gate delays after the input operands A, B and C0 are applied to the 32-bit adder. Then C8 is available at the output of the second adder after a further 2 gate delays, C12 is available after a further 2 gate delays, and so on. Finally C28, the carry-in to the high-order 4-bit adder, is available after a total of (6 x 2) + 3 = 15 gate delays. C32 is available after 2 more gate delays, i.e. after 17 gate delays, and S31 is available after a further 3 gate delays, i.e. 18 gate delays. These gate delays are much smaller than the total delays of 63 and 64 for S31 and C32 if a ripple-carry adder is used.
Multilevel Generate and Propagate Functions

In the 32-bit adder just discussed, the carries C4, C8, C12, … ripple through the 4-bit adder blocks with two gate delays per block. This is analogous to the way that individual carries ripple through each bit stage in a ripple-carry adder. By using multilevel block generate and propagate functions, it is possible to use the lookahead approach to develop the carries C4, C8, C12, … in parallel, in a multilevel carry-lookahead circuit. Fig. 2.17 shows a 16-bit adder implemented using four 4-bit adder blocks. Here, the blocks provide new output functions defined as G^K and P^K, where K = 0 for the first 4-bit block, K = 1 for the second 4-bit block, and so on. In the first block,

P^0 = P3 P2 P1 P0
and
G^0 = G3 + P3 G2 + P3 P2 G1 + P3 P2 P1 G0
and
yisu
xii-s
rii-a
*7-4
y?-t
*so
ya
O".
Fig. 2.17 16-bit carry-lookahead adder built from 4-blt adders
Therefore, we can use the first-level Gi and Pi functions to determine whether bit stage i generates or propagates a carry, and we can use the second-level G^K and P^K functions to determine whether block K generates or propagates a carry. With these new functions available, it is not necessary to wait for carries to ripple through the 4-bit blocks. Looking at Fig. 2.17, we can determine C16 as

C16 = G^3 + P^3 G^2 + P^3 P^2 G^1 + P^3 P^2 P^1 G^0 + P^3 P^2 P^1 P^0 C0
The above expression is identical in form to the expression for C4; only the variable names are different. Therefore, the structure of the carry-lookahead circuit in Fig. 2.17 is identical to the carry-lookahead circuit shown in Fig. 2.16. However, it is important to note that the carries C4, C8, C12 and C16 generated internally by the 4-bit adder blocks are not needed in Fig. 2.17 because they are generated by the multilevel carry-lookahead circuits.
Let us see the delay in producing outputs from the 16-bit carry-lookahead adder. The delay in developing the carries produced by the carry-lookahead circuits is two gate delays more than the delay needed to develop the G^K and P^K functions. The G^K and P^K functions require two gate delays and one gate delay, respectively, after the generation of Gi and Pi. Therefore, all carries produced by the carry-lookahead circuits are available 5 gate delays after A, B and C0 are applied as inputs. The carry C15 is generated inside the high-order 4-bit block in two gate delays after C12, followed by S15 in one further gate delay. Therefore, S15 is available after 8 gate delays. These delays, 5 gate delays for C16 and 8 gate delays for S15, are less as compared to 9 and 10 gate delays for C16 and S15 in cascaded 4-bit carry-lookahead adder blocks, respectively.
We can cascade two 16-bit adders to implement a 32-bit adder. Here, only two more gate delays are required to get C32 and S31 than C16 and S15, respectively. Therefore, C32 is available after 7 gate delays and S31 is available after 10 gate delays. These delays are less compared to the 17 and 18 gate delays for the same outputs if the 32-bit adder is built from a cascade of eight 4-bit adders.
If we go for a third level, then we can build a 64-bit adder using four 16-bit adders. The delay through this adder will be 12 gate delays for S63 and 7 gate delays for C64. Let us now see the multiplication of unsigned and signed numbers.
     1101       Multiplicand (13)
   x 1001       Multiplier (9)
     1101       Partial products
    0000
   0000
  1101
  1110101       Product (117)
Fig. 2.18 shows the usual algorithm for multiplying positive numbers by hand. Looking at this algorithm we can note the following points :

1. In the binary system the partial products are easily defined. When the multiplier bit is 0, the partial product is 0, and when the multiplier bit is 1, the partial product is the multiplicand.
Fig. 2.19 Hardware implementation of binary multiplier : register B (multiplicand), registers C, A and Q (multiplier), an n-bit adder and shift-and-add control logic
2. If bit 0 (Q0) is one, then the multiplicand and partial product are added and all bits of the C, A and Q registers are shifted to the right one bit, so that the C bit goes into An-1, A0 goes into Qn-1, and Q0 is lost. If bit 0 (Q0) is 0, then no addition is performed; only the shift operation is carried out.

3. Steps 1 and 2 are repeated n times to get the desired result in the A and Q registers.
A flowchart for multiplication operation is shown in Fig. 2.20.
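The add-and-shift procedure of steps 1 to 3 can be sketched in software. This is an illustrative model of the C, A and Q registers, not the hardware itself; the function name and register encoding are assumptions made here.

```python
def shift_add_multiply(multiplicand: int, multiplier: int, n: int = 4) -> int:
    """Unsigned shift-and-add multiplication using C, A and Q registers:
    C is the carry flip-flop, A the accumulator, Q holds the multiplier."""
    mask = (1 << n) - 1
    c, a, q = 0, 0, multiplier & mask
    for _ in range(n):
        if q & 1:                                 # Q0 = 1: add multiplicand to A
            total = a + (multiplicand & mask)
            c, a = total >> n, total & mask
        combined = (c << 2 * n) | (a << n) | q    # shift C, A, Q right one bit
        combined >>= 1
        c, a, q = 0, (combined >> n) & mask, combined & mask
    return (a << n) | q                           # product sits in A and Q

print(shift_add_multiply(0b1101, 0b1011))  # 143, i.e. 13 x 11
```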
C   A      Q      Operation
0   0000   1011   Initial values
0   1101   1011   Add     } First cycle
0   0110   1101   Shift   }
1   0011   1101   Add     } Second cycle
0   1001   1110   Shift   }
0   1001   1110   No add  } Third cycle
0   0100   1111   Shift   }
1   0001   1111   Add     } Fourth cycle
0   1000   1111   Shift   }

Final product : 10001111 = (143)10   (multiplicand 1101 = 13, multiplier 1011 = 11)
Example : Recode the multipliers 001110 and 011001 for Booth's multiplication.
Solution : A 0 is implied to the right of the LSB; each recoded digit at position i equals b(i-1) - b(i).

Multiplier :           0  0  1  1  1  0
Recoded multiplier :   0 +1  0  0 -1  0

Multiplier :           0  1  1  0  0  1
Recoded multiplier :  +1  0 -1  0 +1 -1
Fig. 2.22 shows Booth's multiplication. As shown in Fig. 2.22, whenever the multiplicand is multiplied by -1, its 2's complement is taken as a partial result.

Example : Multiply 010011 (+19) by 001100 (+12) using the Booth algorithm.
Solution :
Multiplier :           0  0  1  1  0  0
Recoded multiplier :   0 +1  0 -1  0  0
Multiplicand :         0  1  0  0  1  1

Multiplication :
x(-1) at position 2 :  1 1 1 1 1 0 1 1 0 1 0 0   <- 2's complement of the multiplicand
x(+1) at position 4 :  0 0 0 1 0 0 1 1 0 0 0 0
Product :              0 0 0 0 1 1 1 0 0 1 0 0   (+228)
The same algorithm can be used for a negative multiplier. This is illustrated in the following example.

Example 2.7 : Multiply 0 1 1 1 0 (+14) and 1 1 0 1 1 (-5).
Solution :
Multiplicand :         0  1  1  1  0   (+14)
Multiplier :           1  1  0  1  1   (-5)
Recoded multiplier :   0 -1 +1  0 -1

Multiplication :
x(-1) at position 0 :  1 1 1 1 1 1 0 0 1 0   <- 2's complement of the multiplicand
x(+1) at position 2 :  0 0 0 0 1 1 1 0 0 0
x(-1) at position 3 :  1 1 1 0 0 1 0 0 0 0   <- 2's complement of the multiplicand
Product :              1 1 1 0 1 1 1 0 1 0   (-70)
The same algorithm can also be used when both the multiplier and the multiplicand are negative. This is illustrated in the following example.

Example 2.8 : Multiply the following signed numbers using the Booth algorithm.
Multiplicand : 1 1 0 0 1 1 (-13)
Multiplier : 1 0 1 1 0 0 (-20)
Solution :
Recoded multiplier : -1 +1 0 -1 0 0

Multiplication :
x(-1) at position 2 :  0 0 0 0 0 0 1 1 0 1 0 0   <- 2's complement of the multiplicand
x(+1) at position 4 :  1 1 1 1 0 0 1 1 0 0 0 0
x(-1) at position 5 :  0 0 0 1 1 0 1 0 0 0 0 0   <- 2's complement of the multiplicand
Product :              0 0 0 1 0 0 0 0 0 1 0 0   (+260)
The Booth algorithm can be implemented as shown in Fig. 2.23. The circuit is similar to the circuit for positive-number multiplication. It consists of an n-bit adder, shift/add/subtract control logic and four registers : A, B, Q and Q-1. As shown in Fig. 2.23, the multiplier and multiplicand are loaded into register Q and register B, respectively, and register A and Q-1 are initially set to 0.

Initial settings : A = 0 and Q-1 = 0
The n-bit adder performs addition of two inputs. One input is the A register and the other input is the multiplicand. In the case of addition, the Add/sub line is 0 and the multiplicand is applied directly as the second input to the n-bit adder. In the case of subtraction, the Add/sub line is 1; the multiplicand is complemented before being applied to the n-bit adder and the carry-in is set to 1. As a result, the 2's complement of the multiplicand is added to the A register.

The shift, add and subtract control logic scans bits Q0 and Q-1 one pair at a time and generates the control signals as shown in Table 2.3. If the two bits are the same (1-1 or 0-0), then all of the bits of the A, Q and Q-1 registers are shifted to the right 1 bit without addition or subtraction (Add/Subtract Enable = 0). If the two bits differ, then the multiplicand (B register) is added to or subtracted from the A register, depending on the status of the bits : if Q0 = 0 and Q-1 = 1, the multiplicand is added, and if Q0 = 1 and Q-1 = 0, the multiplicand is subtracted. After addition or subtraction, a right shift occurs such that the leftmost bit of A (An-1) is not only shifted into An-2, but also remains in An-1. This is required to preserve the sign of the number in A and Q. It is known as an arithmetic shift, since it preserves the sign bit.
Q0   Q-1   Add/sub      Add/Subtract Enable   Shift
0    0     x            0                     1
0    1     0 (add)      1                     1
1    0     1 (subtract) 1                     1
1    1     x            0                     1

Table 2.3 Truth table for shift, add and subtract control logic
The sequence of events in Booth's algorithm can be explained with the help of
flowchart shown in Fig. 2.24.
Let us see the multiplication of the 4-bit numbers 5 and 4 with all possible sign combinations.

(a) 5 x 4 : Multiplicand (B) = 0101 (5), Multiplier (Q) = 0100 (4)

           A      Q      Q-1   Operation
Initial    0000   0100   0     Initial values
Step 1 :   0000   0010   0     Shift right
Step 2 :   0000   0001   0     Shift right
Step 3 :   1011   0001   0     A <- A - B
           1101   1000   1     Shift right
Step 4 :   0010   1000   1     A <- A + B
           0001   0100   0     Shift right

Result : A Q = 0001 0100 = (20)10
(b) 5 x (-4) : Multiplicand (B) = 0101 (5), Multiplier (Q) = 1100 (-4)

           A      Q      Q-1   Operation
Initial    0000   1100   0     Initial values
Step 1 :   0000   0110   0     Shift right
Step 2 :   0000   0011   0     Shift right
Step 3 :   1011   0011   0     A <- A - B
           1101   1001   1     Shift right
Step 4 :   1110   1100   1     Shift right

Result : A Q = 1110 1100 = (-20)10   (result is in 2's complement form)
(c) (-5) x 4 : Multiplicand (B) = 1011 (-5), Multiplier (Q) = 0100 (4)

           A      Q      Q-1   Operation
Initial    0000   0100   0     Initial values
Step 1 :   0000   0010   0     Shift right
Step 2 :   0000   0001   0     Shift right
Step 3 :   0101   0001   0     A <- A - B
           0010   1000   1     Shift right
Step 4 :   1101   1000   1     A <- A + B
           1110   1100   0     Shift right

Result : A Q = 1110 1100 = (-20)10   (result is in 2's complement form)
(d) (-5) x (-4) : Multiplicand (B) = 1011 (-5), Multiplier (Q) = 1100 (-4)

           A      Q      Q-1   Operation
Initial    0000   1100   0     Initial values
Step 1 :   0000   0110   0     Shift right
Step 2 :   0000   0011   0     Shift right
Step 3 :   0101   0011   0     A <- A - B
           0010   1001   1     Shift right
Step 4 :   0001   0100   1     Shift right

Result : A Q = 0001 0100 = (20)10
Multiplier bit-pair    Multiplier bit on the right    Multiplicand selected
(i+1)   (i)            (i-1)                          at position i
0       0              0                              0 x M
0       0              1                              +1 x M
0       1              0                              +1 x M
0       1              1                              +2 x M
1       0              0                              -2 x M
1       0              1                              -1 x M
1       1              0                              -1 x M
1       1              1                              0 x M

Table 2.4 Bit-pair recoding of the multiplier
Example 2.9 : Find the bit-pair code for the multiplier 11010.
Solution : Sign-extend the multiplier to an even number of bits (111010) and assume an implied 0 to the right of the LSB. Examining each bit pair together with the bit on its right :

Pair (b1 b0) = 10, bit on the right = 0   ->  -2
Pair (b3 b2) = 10, bit on the right = 1   ->  -1
Pair (b5 b4) = 11, bit on the right = 1   ->   0

Bit-pair recoded multiplier : 0 -1 -2
Example 2.10 : Multiply the given signed 2's complement numbers using bit-pair recoding.
A = 110101 multiplicand (-11)
B = 011011 multiplier (+27)
Solution : Let us find the bit-pair code for the multiplier, with an implied 0 to the right of the LSB :

Pair (b1 b0) = 11, bit on the right = 0   ->  -1
Pair (b3 b2) = 10, bit on the right = 1   ->  -1
Pair (b5 b4) = 01, bit on the right = 1   ->  +2

Recoded multiplier : +2 -1 -1

Multiplication :
x(-1) at position 0 :  0 0 0 0 0 0 0 0 1 0 1 1   <- 2's complement of the multiplicand
x(-1) at position 2 :  0 0 0 0 0 0 1 0 1 1 0 0   <- 2's complement of the multiplicand
x(+2) at position 4 :  1 1 1 0 1 0 1 0 0 0 0 0   <- multiplicand x (+2)
Product :              1 1 1 0 1 1 0 1 0 1 1 1   (-297)
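Bit-pair recoding can be sketched directly from Table 2.4: the digit for pair (b(i+1), b(i)) with b(i-1) on its right is -2·b(i+1) + b(i) + b(i-1). The function below is an illustration with names chosen here.

```python
def bit_pair_recode(bits: str) -> list[int]:
    """Bit-pair recode a 2's complement multiplier (Table 2.4): one digit
    in {-2, -1, 0, +1, +2} per bit pair, most significant pair first."""
    if len(bits) % 2:                        # sign-extend to an even length
        bits = bits[0] + bits
    b = [int(c) for c in bits] + [0]         # implied 0 to the right of the LSB
    digits = []
    for i in range(0, len(bits), 2):
        hi, lo, right = b[i], b[i + 1], b[i + 2]
        digits.append(-2 * hi + lo + right)  # digit = -2*b(i+1) + b(i) + b(i-1)
    return digits

print(bit_pair_recode("11010"))   # [0, -1, -2], as in Example 2.9
print(bit_pair_recode("011011"))  # [2, -1, -1], as in Example 2.10
```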
Consider the division of 169 by 12, first in decimal and then in binary :

        14            Quotient
  12 ) 169            Dividend
       12
        49            Partial remainder
        48
         1            Remainder

           1110       Quotient
  1100 ) 10101001     Dividend
         1100
          10010
          1100
           01100
           1100
           00001      Remainder
In both divisions the process is the same; only, in binary division, the quotient digits are 0 and 1. We will now see the binary division process in detail. First, the bits of the dividend are examined from left to right, until the set of bits examined represents a number greater than or equal to the divisor; this is referred to as the divisor being able to divide the number. Until this condition occurs, 0s are placed in the quotient from left to right. When the condition is satisfied, a 1 is placed in the quotient and the divisor is subtracted from the partial dividend. The result is referred to as a partial remainder. From this point onwards, the division process follows a repetition of steps. In each repetition cycle, additional bits from the dividend are brought down to the partial remainder until the result is greater than or equal to the divisor, and the divisor is subtracted from the result to produce a new partial remainder. The process continues until all the bits of the dividend are brought down and the result is still less than the divisor.
In the circuit implementation, the n-bit divisor is loaded into register B, the n-bit dividend into register Q, and register A is cleared. The restoring division algorithm then performs the following steps n times :

1. Shift A and Q left one binary position.
2. Subtract the divisor from A and place the answer back in A (A <- A - B).
3. If the sign bit of A is 1, set Q0 to 0 and add the divisor back to A (that is, restore A); otherwise, set Q0 to 1.
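The three steps above can be sketched as follows. This is an illustrative model (names and the register widths are assumptions), not a gate-level description.

```python
def restoring_divide(dividend: int, divisor: int, n: int = 4) -> tuple[int, int]:
    """Restoring division with an (n+1)-bit A register and an n-bit Q register."""
    a, q = 0, dividend
    for _ in range(n):
        # Step 1: shift A and Q left one position
        a = (a << 1) | (q >> (n - 1))
        q = (q << 1) & ((1 << n) - 1)
        a -= divisor                 # Step 2: A <- A - B
        if a < 0:                    # Step 3: sign of A is 1 -> restore, Q0 = 0
            a += divisor
        else:                        #         sign of A is 0 -> Q0 = 1
            q |= 1
    return q, a                      # (quotient, remainder)

print(restoring_divide(0b1010, 0b0011))  # (3, 1): 10 divided by 3
```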
Let us see one example. Consider a 4-bit dividend and a 2-bit divisor :
Dividend = 1010
Divisor = 0011

            A Register   Q Register   Operation
Initially   00000        1010         Dividend in Q
Cycle 1 :   00001        010_         Shift
            11110                     Subtract B (negative)
            00001        0100         Restore (A + B), set Q0 = 0
Cycle 2 :   00010        100_         Shift
            11111                     Subtract B (negative)
            00010        1000         Restore (A + B), set Q0 = 0
Cycle 3 :   00101        000_         Shift
            00010        0001         Subtract B (positive), set Q0 = 1
Cycle 4 :   00100        001_         Shift
            00001        0011         Subtract B (positive), set Q0 = 1

Result : Remainder (A) = 00001 = 1, Quotient (Q) = 0011 = 3
The division algorithm just discussed needs restoring of register A after each unsuccessful subtraction (a subtraction is said to be unsuccessful if the result is negative). Therefore, it is referred to as the restoring division algorithm. This algorithm can be improved, giving the non-restoring division algorithm. Consider the sequence of operations that takes place after the subtraction operation in the restoring algorithm :
If A is positive :
  Shift left and subtract the divisor, i.e. compute 2A - B.
If A is negative :
  Restore (A + B), then shift left and subtract the divisor, i.e. compute 2(A + B) - B = 2A + B.

So when A is negative, the restore step can be skipped entirely : we simply shift left and add the divisor instead of subtracting it.
Looking at the above operations we can write the following steps for the non-restoring algorithm.

Step 1 : If the sign of A is 0, shift A and Q left one bit position and subtract the divisor from A; otherwise, shift A and Q left and add the divisor to A. Then, if the sign of A is 0, set Q0 to 1; otherwise, set Q0 to 0.
Step 2 : Repeat step 1 n times.
Step 3 : If the sign of A is 1, add the divisor to A.

Note : Step 3 is required to leave the proper positive remainder in A at the end of n cycles.
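The non-restoring steps can be sketched in the same style as before (an illustration with invented names; Python's arbitrary-precision integers stand in for the sign bit of register A).

```python
def nonrestoring_divide(dividend: int, divisor: int, n: int = 4) -> tuple[int, int]:
    """Non-restoring division: shift, then subtract or add depending on the
    sign of A; a single final correction replaces the per-step restore."""
    a, q = 0, dividend
    for _ in range(n):
        # Step 1: shift A and Q left, then subtract (A >= 0) or add (A < 0)
        shifted_out = q >> (n - 1)
        q = (q << 1) & ((1 << n) - 1)
        if a >= 0:
            a = ((a << 1) | shifted_out) - divisor
        else:
            a = ((a << 1) | shifted_out) + divisor
        if a >= 0:                   # set Q0 according to the new sign of A
            q |= 1
    if a < 0:                        # Step 3: final correction of the remainder
        a += divisor
    return q, a

print(nonrestoring_divide(0b1010, 0b0011))  # (3, 1): step 3 not needed
print(nonrestoring_divide(0b1011, 0b0101))  # (2, 1): step 3 needed
```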
Consider again the 4-bit dividend and 2-bit divisor :
Dividend = 1010
Divisor = 0011
            A Register   Q Register   Operation
Initially   00000        1010         Dividend in Q
Cycle 1 :   00001        010_         Shift
            11110        0100         Subtract B, set Q0 = 0
Cycle 2 :   11100        100_         Shift
            11111        1000         Add B, set Q0 = 0
Cycle 3 :   11111        000_         Shift
            00010        0001         Add B, set Q0 = 1
Cycle 4 :   00100        001_         Shift
            00001        0011         Subtract B, set Q0 = 1

Result : Remainder (A) = 00001 = 1, Quotient (Q) = 0011 = 3
In the above example, register A is positive after the 4 cycles and hence step 3 is not required. Let us see another example where we need step 3. Consider the 4-bit dividend and divisor :
Dividend = 1011
Divisor = 0101
            A Register   Q Register   Operation
Initially   00000        1011         Dividend in Q
Cycle 1 :   00001        011_         Shift
            11100        0110         Subtract B, set Q0 = 0
Cycle 2 :   11000        110_         Shift
            11101        1100         Add B, set Q0 = 0
Cycle 3 :   11011        100_         Shift
            00000        1001         Add B, set Q0 = 1
Cycle 4 :   00001        001_         Shift
            11100        0010         Subtract B, set Q0 = 0

Step 3 :    00001                     Add B (restore remainder)

Result : Remainder (A) = 00001 = 1, Quotient (Q) = 0010 = 2
Comparison :

Restoring division : the partial remainder must be restored (A + B) after every unsuccessful subtraction; slower algorithm.
Non-restoring division : no restoring step within the loop (at most one final correction); faster algorithm.
With this fractional number system we can represent fractional numbers in the following range :

-(1 - 2^-(n-1)) <= F <= +(1 - 2^-(n-1))
The floating point representation has three fields : sign, significant digits and exponent. Let us consider the number 111101.1000110 to be represented in the floating point format. To represent the number in floating point format, the binary point is shifted to the right of the first bit and the number is multiplied by the correct scaling factor to get the same value. The number is then said to be in normalized form and is given as

111101.1000110 = 1.111011000110 x 2^5   (normalized form : 1.111011000110 are the significant digits, 2^5 is the scale factor)

It is important to note that the base in the scaling factor is fixed, 2, and therefore it does not need to appear explicitly in the machine representation of a floating point number. So finally, three fields, sign, significant digits and exponent, represent the floating point number system. The string of significant digits is commonly known as the mantissa. For the number above :

Mantissa = 111011000110
Exponent = 5
In floating point numbers, a bias value is added to the true exponent. This solves the problem of representing negative exponents. It also means that the magnitudes of two numbers can be compared by doing arithmetic on the exponents first.
Single precision (32 bits) :
Bit 31 : S, sign of number (0 signifies +, 1 signifies -)
Bits 30-23 : E', 8-bit signed exponent in excess-127 representation
Bits 22-0 : 23-bit mantissa fraction M
Value represented = ±1.M x 2^(E' - 127)

Double precision (64 bits) :
Bit 63 : S, sign
Bits 62-52 : E', 11-bit exponent in excess-1023 representation
Bits 51-0 : 52-bit mantissa fraction M
Value represented = ±1.M x 2^(E' - 1023)

Fig. 2.32 IEEE standard floating point formats
The 32-bit standard representation shown in Fig. 2.32 (a) is called a single precision representation because it occupies a single 32-bit word. The 32 bits are divided into three fields as shown below :

(field 1) Sign : 1 bit
(field 2) Exponent : 8 bits
(field 3) Mantissa : 23 bits

The sign of the number is given in the first bit, followed by a representation of the exponent (to the base 2) of the scale factor. Instead of the signed exponent E, the value actually stored in the exponent field is E' = E + bias. In the 32-bit floating point system (single precision), the bias is 127; hence E' = E + 127. This representation of the exponent is also called the excess-127 format. The end values of E', namely 0 and 255, are used to indicate the floating point values of exact zero and infinity, respectively, in single precision. Thus the range of E' for normal values in single precision is 0 < E' < 255. This means that for the 32-bit representation the actual exponent E is in the range -126 <= E <= 127.
The 64-bit standard representation shown in Fig. 2.32 (b) is called a double precision representation because it occupies two 32-bit words. The 64 bits are divided into three fields as shown below :

(field 1) Sign : 1 bit
(field 2) Exponent : 11 bits
(field 3) Mantissa : 52 bits

In the double precision format, the value actually stored in the exponent field is given as E' = E + 1023. Here the bias value is 1023 and hence it is also called the excess-1023 format. The end values of E', namely 0 and 2047, are used to indicate the floating point values of exact zero and infinity, respectively. Thus the range of E' for normal values in double precision is 0 < E' < 2047. This means that for the 64-bit representation the actual exponent E is in the range -1022 <= E <= 1023.
Example 2.11 : Represent 1259.125 in single precision and double precision formats.
Solution :
Integer part :
16 ) 1259   remainder 11 (B)
16 )   78   remainder 14 (E)
        4
(1259)10 = (4EB)16 = (100 1110 1011)2

Fractional part :
0.125 x 2 = 0.25   -> 0
0.25 x 2  = 0.5    -> 0
0.5 x 2   = 1.0    -> 1
(0.125)10 = (0.001)2

Binary number = 10011101011 + 0.001 = 10011101011.001 = 1.0011101011001 x 2^10

Now we will see the representation of the number in single precision and double precision formats.

Single precision :
S = 0, E = 10 and M = 0011101011001
E' = E + 127 = 137 = (10001001)2

Sign : 0
Exponent : 1000 1001
Mantissa : 0011 1010 1100 1000 0000 000

Double precision :
For the given number, S = 0, E = 10 and M = 0011101011001
E' = E + 1023 = 1033 = (10000001001)2

Sign : 0
Exponent : 100 0000 1001
Mantissa : 0011101011001 followed by 39 zeros
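Python's struct module can confirm the fields computed by hand above. This is an illustrative check; the helper name is chosen here.

```python
import struct

def single_precision_fields(x: float) -> tuple[int, int, str]:
    """Pack x as IEEE 754 single precision and split it into the sign bit S,
    the biased exponent E' and the 23 mantissa bits."""
    bits = int.from_bytes(struct.pack(">f", x), "big")
    sign = bits >> 31
    e_prime = (bits >> 23) & 0xFF
    mantissa = format(bits & 0x7FFFFF, "023b")
    return sign, e_prime, mantissa

s, e, m = single_precision_fields(1259.125)
print(s, e, m)   # S = 0, E' = 137, mantissa 0011101011001 padded with zeros
print(e - 127)   # recovers the true exponent E = 10
```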
Example 2.12 : Represent -307.1875 in single precision and double precision formats.
Solution :
Integer part :
16 ) 307   remainder 3
16 )  19   remainder 3
       1
(307)10 = (133)16 = (1 0011 0011)2

Fractional part :
0.1875 x 2 = 0.375   -> 0
0.375 x 2  = 0.75    -> 0
0.75 x 2   = 1.5     -> 1
0.5 x 2    = 1.0     -> 1
(0.1875)10 = (0.0011)2

Binary number = -100110011.0011 = -1.001100110011 x 2^8

Now we will see the representation of the number in single precision and double precision formats.

Single precision :
S = 1, E = 8 and M = 001100110011
E' = E + 127 = 8 + 127 = 135 = (10000111)2

Sign : 1
Exponent : 1000 0111
Mantissa : 0011 0011 0011 0000 0000 000

Double precision :
S = 1, E = 8 and M = 001100110011
E' = E + 1023 = 8 + 1023 = 1031 = (10000000111)2

Sign : 1
Exponent : 100 0000 0111
Mantissa : 001100110011 followed by 40 zeros
Example 2.13 : Represent 0.0625₁₀ in single and double precision formats.

Solution :

Fractional Part :

0.0625 × 2 = 0.125    0
0.125  × 2 = 0.250    0
0.250  × 2 = 0.5      0
0.5    × 2 = 1.0      1

∴ 0.0625₁₀ = 0.0001₂

Binary number = 0.0001 = 1.0 × 2⁻⁴

Now we will see the representation of the number in single precision and double precision formats. Note that after normalization the mantissa field is all zeros, since the entire value is carried by the implicit leading 1.

Single Precision

For the given number,
S = 0, E = -4 and M = 000…0
E' = E + 127 = -4 + 127 = 123₁₀ = 0111 1011₂

Sign : 0    Exponent : 0111 1011    Mantissa : 000…0

Double Precision

For the given number,
S = 0, E = -4 and M = 000…0
E' = E + 1023 = -4 + 1023 = 1019₁₀ = 011 1111 1011₂

Sign : 0    Exponent : 011 1111 1011    Mantissa : 000…0
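Since 0.0625 is an exact power of two, its single precision pattern is easy to confirm (a sketch added by us):

```python
import struct

# 0.0625 = 1.0 x 2^-4 -> sign 0, E' = 123 = 0111 1011, mantissa all zeros:
# 0 01111011 00000000000000000000000 = 3D800000H
bits = struct.pack(">f", 0.0625).hex()
```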
Example 2.14 : Represent π in single and double precision formats.

Solution : Given : π = 3.243F6A8885A308D3₁₆

π = 11.0010 0100 0011 1111 0110 1010 1000 1000 1000 0101 1010 0011 0000 1000 1101 0011₂
  = 1.1001 0010 0001 1111 1011 0101 0100 0100 0100 0010 1101 0001 1000 0100 0110 1001 1 × 2¹

Single Precision

S = 0, E = 1 and M = 1001 0010 0001 1111 1011 010 (truncated to 23 bits)
E' = E + 127 = 1 + 127 = 128₁₀ = 1000 0000₂

0 | 1000 0000 | 1001 0010 0001 1111 1011 010
Sign  Exponent   Mantissa

Double Precision

S = 0, E = 1 and M = 1001 0010 0001 1111 1011 0101 0100 0100 0100 0010 1101 0001 1000 (truncated to 52 bits)
E' = E + 1023 = 1 + 1023 = 1024₁₀ = 100 0000 0000₂

0 | 100 0000 0000 | 1001 0010 0001 1111 1011 0101 0100 0100 0100 0010 1101 0001 1000
Sign   Exponent     Mantissa
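Python's math.pi stores π rounded to nearest in double precision; since the 53rd mantissa bit of π happens to be 0, that coincides with the truncated value above, so the pattern can be checked directly (our sketch, not the book's):

```python
import math
import struct

# Expect 0x4009 21FB 5444 2D18:
# sign 0, E' = 1024 = 100 0000 0000,
# mantissa 1001 0010 0001 1111 1011 0101 0100 0100 0100 0010 1101 0001 1000
bits = struct.pack(">d", math.pi).hex()
```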
2-47
The extreme values of biased exponent, E' are used to represent special values.
When E' = 0 and the mantissa fraction M is zero, the value exact 0 is represented. The
biased exponent with all Is is reversed to represent infinity, where inifinity is the
result of dividing a normal number by zero. The sign bit is still part of these
representations, so there are 0 %nd representations.
2.8.3 Exceptions
According to IEEE standards, the processor sets exception flags if underflow,
overflow, divide by zero, inexact or invalid conditions occur during the program
execution.
Underflow : In a single precision, if the number requires an exponent less than
126 or in a double precision, if the number requires an exponent less than 1022 to
represent its normalized form the underflow occurs.
Divide by zero : Divide by zero exception occurs when any number is divided by
zero.
Inexact : Inexact is the name for a result that requires rounding in order to be
represented in one of the normal formats.
Invalid : An invalid exception occurs if operations such as 0/0 or -Jl are
attempted. When exception occurs, the results are set to special values. System and
user defined routines are used to handle such exceptions.
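These special encodings and conditions can be observed from Python (an illustrative sketch we have added; Python surfaces some IEEE conditions as exceptions rather than flags):

```python
import struct

# E' = all 1s with a zero mantissa encodes infinity;
# the sign bit distinguishes +infinity from -infinity.
pos_inf = struct.pack(">f", float("inf")).hex()    # 0 11111111 000...0
neg_inf = struct.pack(">f", float("-inf")).hex()   # 1 11111111 000...0

# Where IEEE hardware would set the divide-by-zero flag,
# Python raises an exception instead:
try:
    1.0 / 0.0
except ZeroDivisionError:
    divide_by_zero_detected = True
```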
Let the two floating point numbers be

X = m₁ × r^e₁  and  Y = m₂ × r^e₂        Assume : e₁ > e₂

Let us see the rules for addition and subtraction.

Step 1 : Select the number with the smaller exponent and shift its mantissa right a number of steps equal to the difference in exponents |e₁ - e₂|. For example, if the numbers are 1.75 × 10² and 6.8 × 10⁴, then the number 1.75 × 10² is selected and converted to 0.0175 × 10⁴.

Step 2 : Set the exponent of the result equal to the larger exponent.

Step 3 : Perform addition / subtraction on the mantissas and determine the sign of the result.

Step 4 : Normalize the result, if necessary.

Let us perform the addition and subtraction of single precision numbers using the above rules.
Example 2.15 : Add the single precision floating point numbers A and B, where A = 44900000H and B = 42A00000H.

Solution :

A = 0 | 1000 1001 | 0010000 … 0
B = 0 | 1000 0101 | 0100000 … 0

Step 1 : Compare exponents

Exponent of A = 1000 1001 = 137
∴ Actual exponent = 137 - 127 (Bias) = 10

Exponent of B = 1000 0101 = 133
∴ Actual exponent = 133 - 127 (Bias) = 6

Number B has the smaller exponent, with difference 4. Hence its mantissa is shifted right by 4 bits. Note that on shifting, B is no longer normalized, so its implicit leading 1 enters the mantissa.

Step 2 : Shift mantissa

Mantissa of B (with implicit 1) = 1.0100000 … 0
Shifted mantissa of B           = 0.00010100 … 0

Step 3 : Add mantissas

Mantissa of A (with implicit 1) = 1.00100000 … 0
Mantissa of B (aligned)         = 0.00010100 … 0
Mantissa of result              = 1.00110100 … 0

∴ Result = 0 | 1000 1001 | 0011010 0 … 0 = 449A0000H
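The whole addition can be checked numerically: A is 1.001 × 2¹⁰ = 1152 and B is 1.01 × 2⁶ = 80, so the sum should be 1232. A quick Python sketch (the helper name f32 is ours):

```python
import struct

def f32(hex_word):
    """Interpret a 32-bit hex pattern as an IEEE 754 single."""
    return struct.unpack(">f", bytes.fromhex(hex_word))[0]

a = f32("44900000")                       # 1.001 x 2^10 = 1152.0
b = f32("42A00000")                       # 1.01  x 2^6  = 80.0
result = struct.pack(">f", a + b).hex()   # pattern of 1232.0
```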
We have seen the addition process for floating point numbers. A similar process is used for subtraction of floating point numbers. In subtraction, the two mantissas are subtracted instead of added, and the sign of the greater mantissa is assigned to the result.

Example 2.16 : Subtract the single precision floating point numbers A and B, i.e. (A - B), where A = 44900000H and B = 42A00000H.

Solution :

A = 0 | 1000 1001 | 0010000 … 0
B = 0 | 1000 0101 | 0100000 … 0

Step 1 : Compare exponents

Exponent of A = 1000 1001 = 137, actual exponent = 137 - 127 (Bias) = 10
Exponent of B = 1000 0101 = 133, actual exponent = 133 - 127 (Bias) = 6

Number B has the smaller exponent, with difference 4. Hence its mantissa (including the implicit leading 1) is shifted right by 4 bits as shown below :

Step 2 : Shift mantissa

Mantissa of B (with implicit 1) = 1.0100000 … 0
Shifted mantissa of B           = 0.00010100 … 0

Step 3 : Subtract mantissas

Mantissa of A (with implicit 1) = 1.00100000 … 0
Mantissa of B (aligned)         = 0.00010100 … 0
Mantissa of result              = 1.00001100 … 0

The mantissa of A is greater than the mantissa of B, therefore the sign of the result is the sign of A.

∴ Result = 0 | 1000 1001 | 0000110 0 … 0 = 44860000H
2.8.4.3 Flowchart and Algorithm for Floating Point Addition and Subtraction

Fig. 2.33 shows the flowchart for floating point addition and subtraction.

Algorithm : Referring to the flowchart given in Fig. 2.33, it is possible to write the algorithm for floating point addition as follows.

Step 1 : Declare registers :

AM [0 : M - 1]    ; For mantissa of A
BM [0 : M - 1]    ; For mantissa of B
AE [0 : E - 1]    ; For exponent of A
BE [0 : E - 1]    ; For exponent of B
E  [0 : E - 1]    ; For temporary storage of exponent

AM ← Mantissa of A
BM ← Mantissa of B
AE ← Exponent of A
BE ← Exponent of B

Step 2 : Compare the two exponents : E = AE - BE

Step 3 : Equalize exponents :
if E < 0 then right shift AM ; E = E + 1 and go to step 3 ;
if E > 0 then right shift BM ; E = E - 1 and go to step 3 ;

Step 4 : Add mantissas : AM = AM + BM and E = Max (AE, BE)

Step 5 : Check mantissa overflow :
if overflow occurs during mantissa addition then
    if E = Emax then report overflow
    else right shift AM ; E = E + 1 ; go to stop

Step 6 : Adjust for zero result :
if AM = 0 then E = 0 ; go to stop

Step 7 : Normalize result :
if AM is normalized then go to stop
else left shift AM ; E = E - 1 ; repeat step 7
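The steps above can be sketched in Python on integer significands (our sketch; fp_add is a hypothetical helper, operands carry the leading 1 explicitly and rounding/underflow handling is omitted):

```python
def fp_add(am, ae, bm, be, mbits=24):
    """Textbook floating point addition: am/bm are mbits-wide integer
    significands with the leading 1 explicit, ae/be unbiased exponents."""
    # Steps 2-3: compare exponents and right-shift the mantissa
    # of the operand with the smaller exponent.
    e = ae - be
    if e < 0:
        am >>= -e
    elif e > 0:
        bm >>= e
    # Step 4: add mantissas; tentative exponent is the larger one.
    m, exp = am + bm, max(ae, be)
    # Step 5: mantissa overflow -> shift right and bump the exponent.
    if m >> mbits:
        m >>= 1
        exp += 1
    # Step 6: zero result.
    if m == 0:
        return 0, 0
    # Step 7: normalize (leading 1 at bit position mbits - 1).
    while not (m >> (mbits - 1)):
        m <<= 1
        exp -= 1
    return m, exp

# 1.001 x 2^10 + 1.01 x 2^6, the operands of Example 2.15:
m, exp = fp_add(0b100100000000000000000000, 10,
                0b101000000000000000000000, 6)
```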
Fig. 2.34 shows the result register format : SR (1-bit sign), ER (8-bit exponent) and MR (23-bit mantissa) together form the 32-bit result.

The tentative exponent of the result is E = EA if EA ≥ EB, or E = EB if EA < EB.

In step 4, the mantissa of the result (M) is normalized. The normalized value is truncated to generate the 23-bit mantissa, MR, of the result. The leading zeros detector determines the number of bit shifts, X, to be applied to the mantissa (M). The value X is then subtracted from the tentative result exponent E to generate the true result exponent, ER.

2.8.4.5 Multiplication and Division

In floating point arithmetic, multiplication and division are somewhat easier than addition and subtraction because an alignment of mantissas is not required in multiplication and division.
The rules for multiplication are :

1. Add the exponents and subtract the bias (127 in case of single precision numbers and 1023 in case of double precision numbers).
2. Multiply the mantissas and determine the sign of the result.
3. Normalize the result, if necessary.

The rules for division are :

1. Subtract the exponents and add the bias (127 in case of single precision numbers and 1023 in case of double precision numbers).
2. Divide the mantissas and determine the sign of the result.
3. Normalize the result, if necessary.
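The exponent bookkeeping in these rules can be sketched as follows (our illustrative helper, not the book's; it handles only the biased exponent fields E' = E + bias):

```python
def exp_fields(ea, eb, bias=127):
    """Biased-exponent arithmetic for floating point multiply / divide.
    ea and eb are the stored exponent fields E' = E + bias."""
    e_mul = ea + eb - bias   # multiply: add exponents, subtract bias
    e_div = ea - eb + bias   # divide:   subtract exponents, add bias
    return e_mul, e_div

# Operands with true exponents 10 and 6 (fields 137 and 133):
# product field  137 + 133 - 127 = 143  (true exponent 16),
# quotient field 137 - 133 + 127 = 131  (true exponent 4).
e_mul, e_div = exp_fields(137, 133)
```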
Chopping : This is the simplest method of truncation. Here, the guard bits are removed without making any changes in the retained bits.

For example, if we want to truncate a fraction from six bits to three bits by this method, we have to remove the 3 least significant (right-most) bits. This is illustrated in Fig. 2.32 :

Original number  : 0. b₋₁ b₋₂ b₋₃ b₋₄ b₋₅ b₋₆
Truncated number : 0. b₋₁ b₋₂ b₋₃

In our example, the error in the 3-bit result ranges from 0 to 0.000111. The value 0.000111 is almost equal to 0.001. Because of this, in general terms we can say that the error in chopping ranges from 0 to almost 1 in the least significant position of the retained bits. In our example, the least significant position of the retained bits is b₋₃. In the chopping method, the error range is not symmetrical about 0 and hence the result of chopping is a biased approximation.
Von Neumann rounding : If the bits to be removed are all 0s, they are simply removed, with no changes to the retained bits. However, if any of the bits to be removed is 1, the least significant bit of the retained bits is set to 1.

In our 6-bit to 3-bit truncation we get results as shown in Fig. 2.33 :

0. b₋₁ b₋₂ b₋₃ b₋₄ b₋₅ b₋₆  →  0. b₋₁ b₋₂ b₋₃    (with b₋₄ b₋₅ b₋₆ = 000)
0. b₋₁ b₋₂ b₋₃ b₋₄ b₋₅ b₋₆  →  0. b₋₁ b₋₂ 1      (with b₋₄ b₋₅ b₋₆ ≠ 000)

For example :

Original Number    Truncated Number
0.010010           0.011            Positive error
0.011010           0.011            Negative error

Fig. 2.34

Here the error ranges between -1 and +1 in the least significant position of the retained bits, so the error range is symmetrical about 0 and the approximation is unbiased.
Rounding : It is the best method of truncation; however, it is the most difficult to implement. In this method, a 1 is added to the LSB position of the bits to be retained if there is a 1 in the MSB position of the bits being removed. In our 6-bit to 3-bit example we get results as shown in Fig. 2.35 :

0. b₋₁ b₋₂ b₋₃ b₋₄ b₋₅ b₋₆  →  0. b₋₁ b₋₂ b₋₃            (with b₋₄ = 0)
0. b₋₁ b₋₂ b₋₃ b₋₄ b₋₅ b₋₆  →  0. b₋₁ b₋₂ b₋₃ + 0.001    (with b₋₄ = 1)

For example :

Original Number    Truncated Number
0.011011           0.011            Negative error
0.011100           0.100            Positive error

Fig. 2.36
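The three truncation methods can be sketched on bit strings (our illustrative helpers, operating on the fractional bits only; in round_nearest a carry may propagate into an extra leading bit):

```python
def chop(frac_bits, keep):
    """Chopping: simply drop the guard bits."""
    return frac_bits[:keep]

def von_neumann(frac_bits, keep):
    """Von Neumann rounding: force the retained LSB to 1
    if any removed bit is 1."""
    kept, removed = frac_bits[:keep], frac_bits[keep:]
    if "1" in removed:
        kept = kept[:-1] + "1"
    return kept

def round_nearest(frac_bits, keep):
    """Rounding: add 1 at the retained LSB if the MSB of the
    removed bits is 1 (the carry may propagate leftwards)."""
    kept, removed = frac_bits[:keep], frac_bits[keep:]
    if removed and removed[0] == "1":
        kept = format(int(kept, 2) + 1, "0{}b".format(keep))
    return kept
```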
Rounding can be implemented using only three guard bits. The first two of these bits are the two most significant bits of the section of the mantissa to be removed. The third bit is the logical OR of all bits beyond these first two bits in the full representation of the mantissa. This bit can be easily maintained during the intermediate steps of the operations to be performed. It should be initialized to 0. If a 1 is shifted out through this position, the bit becomes 1 and retains that value; hence it is usually called the sticky bit.
Solved Examples

Example 2.13 : Multiply (+25) × (-16) using Booth's algorithm.

Solution :

Multiplicand (B) ← 0 1 1 0 0 1 (+25)    Multiplier (Q) ← 1 1 0 0 0 0 (-16)

Steps     A        Q        Q₋₁    Operation
Initial   000000   110000   0
1.        000000   011000   0      Shift right
2.        000000   001100   0      Shift right
3.        000000   000110   0      Shift right
4.        000000   000011   0      Shift right
5.        100111   000011   0      A ← A - B
          110011   100001   1      Shift right
6.        111001   110000   1      Shift right

Result : 111001 110000 = -400₁₀
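The table above can be reproduced with a small Python model of Booth's algorithm (our sketch; booth_multiply is a hypothetical helper working on nbits-wide two's complement operands):

```python
def booth_multiply(m, q, nbits):
    """Booth's algorithm on nbits-wide two's complement operands.
    Returns the signed 2*nbits-bit product."""
    mask = (1 << nbits) - 1
    m &= mask
    neg_m = (-m) & mask
    a, q, q_1 = 0, q & mask, 0
    for _ in range(nbits):
        if (q & 1, q_1) == (1, 0):
            a = (a + neg_m) & mask               # A <- A - B
        elif (q & 1, q_1) == (0, 1):
            a = (a + m) & mask                   # A <- A + B
        # Arithmetic shift right of the combined A : Q : Q-1
        q_1 = q & 1
        q = (q >> 1) | ((a & 1) << (nbits - 1))
        a = (a >> 1) | (a & (1 << (nbits - 1)))  # preserve sign bit
    product = (a << nbits) | q
    if product >> (2 * nbits - 1):               # interpret as signed
        product -= 1 << (2 * nbits)
    return product
```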
Example 2.14 : Divide 25 by 15 using the non-restoring division algorithm.

Solution :

Dividend = 25₁₀ = 11001
Divisor  = 15₁₀ = 01111

A Register   Q Register   Operation
0000000      11001        Initially (Q = Dividend)
0000001      1001_        Shift left
1110010      10010        Subtract (A ← A - B) ; A < 0 ⇒ Q₀ = 0    First cycle
1100101      0010_        Shift left
1110100      00100        Add (A ← A + B) ; A < 0 ⇒ Q₀ = 0         Second cycle
1101000      0100_        Shift left
1110111      01000        Add ; A < 0 ⇒ Q₀ = 0                     Third cycle
1101110      1000_        Shift left
1111101      10000        Add ; A < 0 ⇒ Q₀ = 0                     Fourth cycle
1111011      0000_        Shift left
0001010      00001        Add ; A ≥ 0 ⇒ Q₀ = 1                     Fifth cycle

Remainder = 0001010 = 10₁₀    Quotient = 00001 = 1₁₀

Fig. 2.37
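The non-restoring procedure traced above can be sketched in Python (our helper, using a signed accumulator in place of the two's complement A register):

```python
def non_restoring_divide(dividend, divisor, nbits):
    """Non-restoring division of unsigned nbits-wide operands.
    Returns (quotient, remainder)."""
    a, q = 0, dividend
    for _ in range(nbits):
        # Shift A : Q left one position.
        a = (a << 1) | ((q >> (nbits - 1)) & 1)
        q = (q << 1) & ((1 << nbits) - 1)
        # Subtract the divisor if A >= 0, otherwise add it back.
        a = a - divisor if a >= 0 else a + divisor
        # New quotient bit comes from the sign of A.
        if a >= 0:
            q |= 1
    if a < 0:          # final restore step
        a += divisor
    return q, a
```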
Example 2.15 : Explain Booth's algorithm for multiplication of signed two's complement numbers :

Multiplicand = 110011 (-13)
Multiplier  = 101100 (-20)

Also, implement the above using bit-pair recoding and explain how it achieves faster multiplication.

Solution :

Booth recoding of the multiplier 101100 (scanning the pairs (Qᵢ, Qᵢ₋₁) from the LSB, with Q₋₁ = 0) gives :

Multiplier     :  1    0    1    1    0    0
Recoded digits : -1   +1    0   -1    0    0

Check : -1·2⁵ + 1·2⁴ - 1·2² = -32 + 16 - 4 = -20 ✓

Bit-pair recoding combines the Booth digits two at a time, giving one signed digit per pair of multiplier bits :

Multiplier bit pairs (with the bit to the right) : (10|1)  (11|0)  (00|0)
Bit-pair recoded digits                          :  -1      -1       0

so the multiplier is -1·2⁴ - 1·2² = -20, and the product is

(-13) × (-20) = (-13)(-1)·2⁴ + (-13)(-1)·2² = 208 + 52 = +260 = 000100000100₂

The Booth's algorithm may need a summation at each step, and the number of steps required in Booth's algorithm is equal to the length of the multiplier in bits. Bit-pair recoding halves the maximum number of summands. Hence it achieves faster multiplication.
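The bit-pair (radix-4 Booth) recoding used above can be tabulated mechanically (our sketch; bit_pair_recode is a hypothetical helper and assumes nbits is even):

```python
def bit_pair_recode(q, nbits):
    """Bit-pair recoding of an nbits-wide two's complement multiplier.
    Returns signed digits in {-2,-1,0,1,2}, least significant pair first;
    digit i has weight 4**i."""
    table = {  # (q[i+1], q[i], q[i-1]) -> signed digit
        (0, 0, 0): 0,  (0, 0, 1): 1,  (0, 1, 0): 1,  (0, 1, 1): 2,
        (1, 0, 0): -2, (1, 0, 1): -1, (1, 1, 0): -1, (1, 1, 1): 0,
    }
    bits = [(q >> i) & 1 for i in range(nbits)]
    digits, prev = [], 0
    for i in range(0, nbits, 2):
        digits.append(table[(bits[i + 1], bits[i], prev)])
        prev = bits[i + 1]
    return digits

# Multiplier 101100 (-20): digits [0, -1, -1] -> 0 - 1*4 - 1*16 = -20
digits = bit_pair_recode(0b101100, 6)
```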
Example 2.16 : Represent (178.1875)₁₀ in single and double precision floating point format.

Solution : Step 1 : Convert the decimal number into binary format.

Integer part :

16 ) 178    remainder 2
      11 (B)

∴ (178)₁₀ = (B2)₁₆ = (1011 0010)₂

Fractional part :

0.1875 × 2 = 0.3750    0
0.3750 × 2 = 0.750     0
0.750  × 2 = 1.5       1
0.5    × 2 = 1.0       1

∴ 0.1875₁₀ = 0.0011₂

∴ Binary number = 1011 0010 + 0.0011
               = 1011 0010.0011
               = 1.01100100011 × 2⁷

Step 2 : Single precision

S = 0, E = 7 and M = 01100100011 0…0
E' = E + 127 = 7 + 127 = 134₁₀ = 1000 0110₂

Sign : 0    Exponent : 1000 0110    Mantissa : 01100100011 0…0

Step 3 : Double precision

S = 0, E = 7 and M = 01100100011 0…0
E' = E + 1023 = 7 + 1023 = 1030₁₀ = 100 0000 0110₂

Sign : 0    Exponent : 100 0000 0110    Mantissa : 01100100011 0…0
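Both patterns for 178.1875 can be cross-checked in Python (our sketch, not part of the original text):

```python
import struct

# Single: 0 10000110 01100100011000000000000 = 43323000H
single = struct.pack(">f", 178.1875).hex()

# Double: 0 10000000110 0110010001100...0 = 4066460000000000H
double = struct.pack(">d", 178.1875).hex()
```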
Example 2.17 : Multiply the following numbers using Booth's algorithm :

Multiplicand = +15
Multiplier  = -6

Solution :

Multiplicand (B) = 01111 (+15)    Multiplier (Q) = 1010 (-6)

Steps     A       Q      Q₋₁    Operation
Initial   00000   1010   0
1.        00000   0101   0      Shift right
2.        10001   0101   0      A ← A - B
          11000   1010   1      Shift right
3.        00111   1010   1      A ← A + B
          00011   1101   0      Shift right
4.        10100   1101   0      A ← A - B
          11010   0110   1      Shift right

Result : 11010 0110 = -90₁₀

Example 2.18 : Perform 1100 ÷ 0011 using the restoring and non-restoring division algorithms.
Solution :

Restoring division : (Divisor B = 00011)

A Register   Q Register   Operation
00000        1100         Initially (Q = Dividend)
00001        100_         Shift left
11110        100_         Subtract B ; A < 0
00001        1000         Restore (A ← A + B) ; Q₀ = 0    First cycle
00011        000_         Shift left
00000        0001         Subtract B ; A ≥ 0 ⇒ Q₀ = 1     Second cycle
00000        001_         Shift left
11101        001_         Subtract B ; A < 0
00000        0010         Restore ; Q₀ = 0                Third cycle
00000        010_         Shift left
11101        010_         Subtract B ; A < 0
00000        0100         Restore ; Q₀ = 0                Fourth cycle

Remainder = 00000 = 0₁₀    Quotient = 0100 = 4₁₀

Fig. 2.38(a)
Non-restoring division :

A Register   Q Register   Operation
00000        1100         Initially (Q = Dividend)
00001        100_         Shift left
11110        1000         Subtract B ; A < 0 ⇒ Q₀ = 0    First cycle
11101        000_         Shift left
00000        0001         Add B ; A ≥ 0 ⇒ Q₀ = 1         Second cycle
00000        001_         Shift left
11101        0010         Subtract B ; A < 0 ⇒ Q₀ = 0    Third cycle
11010        010_         Shift left
11101        0100         Add B ; A < 0 ⇒ Q₀ = 0         Fourth cycle
00000                     Restore remainder (A ← A + B)

Remainder = 00000 = 0₁₀    Quotient = 0100 = 4₁₀

Fig. 2.38(b)
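The restoring variant traced in Fig. 2.38(a) can be sketched in Python as well (our helper, mirroring the non-restoring sketch given earlier):

```python
def restoring_divide(dividend, divisor, nbits):
    """Restoring division of unsigned nbits-wide operands.
    Returns (quotient, remainder)."""
    a, q = 0, dividend
    for _ in range(nbits):
        # Shift A : Q left, then trial-subtract the divisor.
        a = (a << 1) | ((q >> (nbits - 1)) & 1)
        q = (q << 1) & ((1 << nbits) - 1)
        a -= divisor
        if a < 0:
            a += divisor      # restore A, quotient bit stays 0
        else:
            q |= 1            # quotient bit 1
    return q, a
```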
Review Questions

1. What do you mean by fixed point numbers ?
2. Explain the hardware for implementation of integer addition and subtraction.
3. Write a short note on the lookahead carry generator.
4. State the principle of operation of a carry-lookahead adder. (CSE Nov./Dec.-2004, IT Nov./Dec.-2003)
5. Show how a 64-bit adder can be constructed using 4-bit adder modules and 4-bit carry-lookahead generator modules. What is the delay in generating C64 here ? (IT April/May-2003)
11. Multiply the following numbers using Booth's algorithm; show each step clearly, indicating the Booth recoding :
    Multiplicand = 010111, Multiplier = 110110
12. Explain the modified Booth's algorithm. (CSE Nov./Dec.-2003)
13. Describe a multiplication speed-up technique.
14. Explain the hardware for implementation of integer division.
15. Explain restoring and non-restoring algorithms for integer division. (CSE Nov./Dec.-2004)
16. State the non-restoring division technique. Simulate the same for 20 ÷ 8.
17. Discuss any one binary division algorithm and simulate the same for 25 ÷ 15. (I.T. April/May-2003)
18. Describe the IEEE standards for single precision and double precision floating point numbers.