
Module 3, 4, 5 Solutions


1. With a neat diagram, explain the internal organization of a 128 × 8 memory chip.

• Each memory cell can hold one bit of information.
• Memory cells are organized in the form of an array.
• One row is one memory word.
• All cells of a row are connected to a common line, known as the “word line”.
• The word line is connected to the address decoder.
• Sense/write circuits are connected to the data input/output lines of the memory chip.
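For a quick size check (a small C sketch; the helper names are ours, not from the textbook): a 128 × 8 chip stores 128 words of 8 bits each, i.e. 1024 cells, so it needs a 7-bit address and 8 data lines; together with two control lines (R/W and CS) and two power-supply connections this gives 19 external connections.

#include <stdio.h>

/* Number of address bits needed for n words: smallest k with 2^k >= n. */
int bits_needed(int n)
{
    int bits = 0;
    while ((1 << bits) < n)
        bits++;
    return bits;
}

int main(void)
{
    int words = 128, bits_per_word = 8;
    printf("address lines: %d\n", bits_needed(words));     /* 7    */
    printf("data lines   : %d\n", bits_per_word);           /* 8    */
    printf("total cells  : %d\n", words * bits_per_word);   /* 1024 */
    return 0;
}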

2. Describe the working of static RAM (SRAM) memories.


• Two transistor inverters are cross-connected to implement a basic flip-flop (latch).
• The cell is connected to one word line and two bit lines by transistors T1 and T2.
• When the word line is at ground level, the transistors are turned off and the latch retains its state.
• Read operation: to read the state of the SRAM cell, the word line is activated to close switches T1 and T2. The Sense/Write circuits at the ends of the bit lines monitor the state of b and b'.
• Write operation: the desired value is placed on bit line b and its complement on b', and the word line is then activated; this forces the latch into the corresponding state.

[Figure: SRAM cell — cross-coupled inverters X and Y connected to bit lines b and b' through transistors T1 and T2, controlled by the word line.]
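The behaviour can be summarised in a toy functional model (logic only, not the electrical behaviour; the type and function names are illustrative):

#include <stdbool.h>

/* One SRAM cell: the latch keeps its value while the word line is inactive. */
typedef struct { bool q; } sram_cell;

/* Read: activate the word line; the bit lines then carry b = q and b' = !q. */
void sram_read(const sram_cell *cell, bool *b, bool *b_bar)
{
    *b = cell->q;
    *b_bar = !cell->q;
}

/* Write: drive b/b' to the desired value, then activate the word line. */
void sram_write(sram_cell *cell, bool value)
{
    cell->q = value;
}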
3. Analyse the working mechanism of asynchronous DRAMs.

• As an example, consider a 2M × 8 asynchronous DRAM chip whose cells are organised in rows; each row can store 512 bytes. 12 bits select a row and 9 bits select a byte group within the row, for a total of 21 address bits.
• First the row address is applied, and the RAS signal latches it. Then the column address is applied, and the CAS signal latches it.
• Timing of the memory device is controlled by a specialized memory-controller circuit which generates RAS and CAS.
• Because the chip itself is not clocked, this is an asynchronous DRAM.

• Fast page mode: suppose we want to access consecutive bytes in the selected row.
• This can be done without having to reselect the row:
 ▪ Add a latch at the output of the sense circuits in each column.
 ▪ All the latches are loaded when the row is selected.
 ▪ Different column addresses can then be applied to select and place different bytes on the data lines.
• A consecutive sequence of column addresses can be applied under control of the CAS signal, without reselecting the row.
 ▪ This allows a block of data to be transferred at a much faster rate than random accesses.
 ▪ A small collection/group of bytes is usually referred to as a block.
• This transfer capability is referred to as the fast page mode feature.
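A small C sketch of the address split described above (21-bit cell address, 12-bit row, 9-bit column; the function names are illustrative):

#include <stdint.h>

/* Split a 21-bit address into the row sent with RAS and the column
   (byte group within the row) sent with CAS.                        */
unsigned row_address(uint32_t addr) { return (addr >> 9) & 0xFFF; } /* high 12 bits */
unsigned col_address(uint32_t addr) { return addr & 0x1FF; }        /* low 9 bits   */

In fast page mode the row address is latched once, and only col_address changes for the consecutive accesses.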
4. Analyse how data are written into read-only memories (ROM). Discuss the different types of read-only memories.
Read-Only Memory:
▪ Data are written into a ROM when it is manufactured.
▪ Non-volatile memory is read in the same manner as volatile memory.
▪ A separate writing process is needed to place information in this memory.
▪ Since normal operation involves only reading of data, this type of memory is called read-only memory (ROM).
Refer ROM cell diagram from text book.
A logic value 0 is stored in the cell if the transistor is connected to ground at point P; otherwise, a 1 is stored. The bit line is connected through a resistor to the power supply. To read the state of the cell, the word line is activated. The data are written into the ROM when it is manufactured, when the transistor connections are fixed.
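The read behaviour can be captured in a tiny sketch (illustrative only; the function name is ours):

/* Reading one ROM cell: the stored bit is fixed at manufacture by whether
   the cell transistor is tied to ground at point P.                       */
int rom_cell_read(int connected_to_ground)   /* 1 = transistor tied to ground */
{
    /* With the word line activated, a grounded transistor pulls the bit
       line low (logic 0); otherwise the pull-up resistor keeps it high (1). */
    return connected_to_ground ? 0 : 1;
}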

• Programmable Read-Only Memory (PROM):
▪ Allows data to be loaded by the user.
▪ The process of inserting the data is irreversible.
▪ Storing information specific to a user in a factory-programmed ROM is expensive.
▪ Providing programming capability to the user may be better.
• Erasable Programmable Read-Only Memory (EPROM):
▪ Allows stored data to be erased and new data to be loaded.
▪ This flexibility is useful during the development phase of digital systems.
▪ It is an erasable, reprogrammable ROM.
▪ Erasure requires exposing the chip to ultraviolet (UV) light.
• Electrically Erasable Programmable Read-Only Memory (EEPROM):
▪ To erase the contents of an EPROM, it has to be exposed to ultraviolet light and usually physically removed from the circuit.
▪ In an EEPROM, the contents can be written and erased electrically, in place.
• Flash memory:
▪ Uses an approach similar to EEPROM.
▪ It is possible to read the contents of a single cell, but writing is done for an entire block of cells.
▪ Flash devices have greater density, hence higher capacity and lower storage cost per bit.
▪ The power consumption of flash memory is very low, making it attractive for use in equipment that is battery-driven.
▪ Single flash chips are not sufficiently large, so larger memory modules are implemented using flash cards and flash drives.

5. What is cache memory? Analyse the three mapping functions of cache memory.
Cache memory is an architectural arrangement which makes the main memory appear faster to the
processor than it really is.

Mapping functions determine how memory blocks are placed in the cache. There are three mapping functions:
▪ Direct mapping
▪ Associative mapping
▪ Set-associative mapping.
Direct mapping

• Block j of the main memory maps to block (j modulo 128) of the cache: block 0 maps to cache block 0, and block 129 maps to cache block 1.
• More than one memory block is mapped onto the same position in the cache.
• This may lead to contention for cache blocks even if the cache is not full.

Associative mapping
• A main memory block can be placed into any cache position.
• The memory address is divided into two fields:
 - The low-order 4 bits identify the word within a block.
 - The high-order 12 bits (the tag) identify a memory block when it is resident in the cache.
• Flexible, and uses the cache space efficiently.
• Replacement algorithms are used to replace an existing block in the cache when the cache is full.

Set-associative mapping

• Blocks of the cache are grouped into sets.
• The mapping function allows a block of the main memory to reside in any block of a specific set.
• Divide the cache into 64 sets, with two blocks per set.
• Memory blocks 0, 64, 128, etc. map to set 0, and each can occupy either of the two positions in that set.
• The memory address is divided into three fields:
 - The low-order 4 bits identify the word within a block.
 - A 6-bit field determines the set number.
 - The high-order 6 bits (the tag) are compared to the tag fields of the two blocks in the set.
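A minimal C sketch of how the 16-bit word address implied by the field sizes above is split under the three mappings (16 words per block, 128 cache blocks, 64 two-way sets); the function names are illustrative:

#include <stdint.h>

/* Common to all three mappings: the low 4 bits select the word within a block. */
unsigned word_in_block(uint16_t addr) { return addr & 0xF; }

/* Direct mapping: 7-bit cache-block field, 5-bit tag. */
unsigned direct_block(uint16_t addr)  { return (addr >> 4) & 0x7F; }
unsigned direct_tag(uint16_t addr)    { return addr >> 11; }

/* Associative mapping: 12-bit tag, compared with every cache block. */
unsigned assoc_tag(uint16_t addr)     { return addr >> 4; }

/* Set-associative mapping: 6-bit set number, 6-bit tag (2 blocks per set). */
unsigned set_number(uint16_t addr)    { return (addr >> 4) & 0x3F; }
unsigned set_tag(uint16_t addr)       { return addr >> 10; }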

6. What is interleaving? Explain the two methods of address layout.

• Interleaving divides the memory system into a number of memory modules. Each module has its own address buffer register (ABR) and data buffer register (DBR).
• It arranges addressing so that successive words in the address space are placed in different modules.
• When requests for memory access involve consecutive addresses, the accesses will be to different modules.
• Since parallel access to these modules is possible, the average rate of fetching words from the main memory can be increased.

Method 1 – consecutive words in a module (refer diagram from text book):

• Consecutive words are placed in the same module.
• The high-order k bits of a memory address determine the module.
• The low-order m bits of a memory address determine the word within a module.
• When a block of words is transferred from main memory to cache, only one module is busy at a time.

Method 2 – consecutive words in consecutive modules, i.e. memory interleaving (refer diagram from text book):

• Consecutive words are located in consecutive modules.
• The low-order k bits of the memory address select a module, and the high-order m bits name a word within that module, so consecutive addresses fall in consecutive modules.
• While transferring a block of data, several memory modules can be kept busy at the same time.
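A small C sketch of the two layouts, assuming 2^k modules of 2^m words each (the particular values of k and m below are only illustrative):

#include <stdint.h>

#define K 3                           /* 2^3 = 8 memory modules     */
#define M 8                           /* 2^8 = 256 words per module */

/* Method 1: the high-order k bits pick the module. */
unsigned module_high(uint32_t addr) { return (addr >> M) & ((1u << K) - 1); }
unsigned offset_high(uint32_t addr) { return addr & ((1u << M) - 1); }

/* Method 2 (interleaving): the low-order k bits pick the module,
   so consecutive addresses land in different modules.            */
unsigned module_low(uint32_t addr)  { return addr & ((1u << K) - 1); }
unsigned offset_low(uint32_t addr)  { return (addr >> K) & ((1u << M) - 1); }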

Module 4
1. With a neat diagram, explain the translation hierarchy for a C program.

A high-level language program is first compiled into an assembly language program and then assembled into an object module in machine language. The linker combines multiple object modules with library routines to resolve all references. The loader places the machine code into the proper memory locations for execution by the processor.
• A compiler takes the source code and generates the corresponding assembly code.
• An assembler converts the assembly code into machine code (an object module).
• A linker merges all the machine-code modules referenced in the code with the required library routines into an executable.
• A loader moves the executable into memory and lets it be executed by the CPU.
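With the GNU toolchain the stages can be run one at a time, which makes the hierarchy visible (a typical invocation; the file name hello.c is only an example):

gcc -S hello.c          # compiler:  hello.c -> hello.s (assembly language)
gcc -c hello.s          # assembler: hello.s -> hello.o (object module)
gcc hello.o -o hello    # linker:    object module + libraries -> executable
./hello                 # loader:    the OS loads the executable into memory and runs it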

2. With a neat diagram, explain the sequential version of the multiplication algorithm and its hardware.

• The multiplier is placed in the 32-bit Multiplier register, and the 64-bit Product register is initialized to zero.
• Over the 32 steps the 32-bit multiplicand will need to move 32 bits to the left, so a 64-bit Multiplicand register is initialized with the 32-bit multiplicand in the right half and zeros in the left half.
• This register is then shifted left 1 bit each step to keep the multiplicand aligned with the sum being accumulated in the 64-bit Product register.
• In each of the 32 steps, if the least significant bit of the Multiplier register is 1, the Multiplicand register is added to the Product register; the Multiplicand register is then shifted left and the Multiplier register shifted right.
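A minimal C sketch of this first, sequential version (unsigned operands assumed; the register names follow the description above):

#include <stdint.h>
#include <stdio.h>

static uint64_t multiply32(uint32_t multiplicand, uint32_t multiplier)
{
    uint64_t product = 0;              /* 64-bit Product register, cleared      */
    uint64_t mcand   = multiplicand;   /* 64-bit Multiplicand reg, right half   */
    for (int i = 0; i < 32; i++) {
        if (multiplier & 1)            /* test the LSB of the Multiplier reg    */
            product += mcand;          /* add the multiplicand to the product   */
        mcand <<= 1;                   /* shift the Multiplicand register left  */
        multiplier >>= 1;              /* shift the Multiplier register right   */
    }
    return product;
}

int main(void)
{
    printf("%llu\n", (unsigned long long)multiply32(100000u, 100000u)); /* 10000000000 */
    return 0;
}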

3. Explain the multiple forms of addressing in data transfer instructions, with examples.

There are seven addressing modes:
• Register offset
• Scaled register offset
• Immediate pre-indexed
• Immediate post-indexed
• Register pre-indexed
• Scaled register pre-indexed
• Register post-indexed

Register offset

Instead of adding a constant to the base register, another register is added to the base register. Example: LDR r2, [r0, r1]

Scaled register offset

This addressing mode allows the offset register to be shifted before it is added to the base register. Example: LDR r2, [r0, r1, LSL #2]

Immediate pre-indexed
On the load instruction, the destination register receives the value fetched from memory, and the base register is updated to the address that was used to access memory. Example: LDR r2, [r0, #4]!

Immediate post-indexed
The address in the base register is used to access memory first, and then the constant is added to or subtracted from the base register. Example: LDR r2, [r0], #4

Register pre-indexed
Same as immediate pre-indexed, except that a register is added or subtracted instead of a constant. Example: LDR r2, [r0, r1]!

Scaled register pre-indexed
Same as register pre-indexed, except that the offset register is shifted before it is added or subtracted. Example: LDR r2, [r0, r1, LSL #2]!

Register post-indexed
Same as immediate post-indexed, except that a register is added or subtracted instead of a constant. Example: LDR r2, [r0], r1
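The address arithmetic behind the examples above can be summarised in a C-style sketch (r0 is the base register, r1 the offset register; this only illustrates the address and base-register updates, not the full ARM semantics):

#include <stdint.h>

uint32_t addressing_examples(uint32_t r0, uint32_t r1)
{
    uint32_t ea;                    /* effective address sent to memory              */
    ea = r0 + r1;                   /* LDR r2,[r0,r1]         register offset        */
    ea = r0 + (r1 << 2);            /* LDR r2,[r0,r1,LSL #2]  scaled register offset */
    ea = r0 + 4;   r0 = ea;         /* LDR r2,[r0,#4]!        immediate pre-indexed  */
    ea = r0;       r0 = r0 + 4;     /* LDR r2,[r0],#4         immediate post-indexed */
    ea = r0 + r1;  r0 = ea;         /* LDR r2,[r0,r1]!        register pre-indexed   */
    ea = r0;       r0 = r0 + r1;    /* LDR r2,[r0],r1         register post-indexed  */
    return ea;
}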

4. With a neat diagram, explain the improved version of the division algorithm.

The Divisor register, ALU and Quotient register are all only 32 bits wide, with only the Remainder register left at 64 bits.
Compared with the first version, the ALU and the Divisor register are halved, and the remainder is shifted left instead of the divisor being shifted right.
It also combines the Quotient register with the right half of the Remainder register.
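A minimal C sketch of this idea (not the exact textbook control steps): a single 64-bit register keeps the partial remainder in its upper half and accumulates quotient bits in its lower half, shifting left once per step. Unsigned operands; the divisor must be non-zero.

#include <stdint.h>
#include <stdio.h>

static void divide32(uint32_t dividend, uint32_t divisor,
                     uint32_t *quotient, uint32_t *remainder)
{
    uint64_t rem = dividend;                 /* dividend starts in the lower half  */
    for (int i = 0; i < 32; i++) {
        int carry = (int)(rem >> 63);        /* bit shifted out of the upper half  */
        rem <<= 1;                           /* shift remainder/quotient pair left */
        if (carry || (rem >> 32) >= divisor) {
            rem -= (uint64_t)divisor << 32;  /* subtract divisor from upper half   */
            rem |= 1;                        /* record a 1 bit of the quotient     */
        }
    }
    *quotient  = (uint32_t)rem;              /* lower half holds the quotient  */
    *remainder = (uint32_t)(rem >> 32);      /* upper half holds the remainder */
}

int main(void)
{
    uint32_t q, r;
    divide32(100, 7, &q, &r);
    printf("100 / 7 = %u remainder %u\n", q, r);   /* 14 remainder 2 */
    return 0;
}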

Module 5

1. Explain the three-bus organisation of the data path with a neat block diagram.

Three buses can be used to connect the registers and the ALU of the processor.
• All general-purpose registers are grouped into a single block called the register file.
• The register file has 3 ports:
 - Two output ports allow the contents of 2 different registers to be simultaneously placed on buses A and B.
 - A third input port allows the data on bus C to be loaded into a third register during the same clock cycle.
• Buses A and B are used to transfer the source operands to the A and B inputs of the ALU.
• The result is transferred to the destination register over bus C.
• An incrementer unit is used to increment the PC by 4.

Instruction execution proceeds as follows:
• Step 1: The contents of the PC are passed through the ALU, using the R=B control signal, and loaded into MAR to start a memory Read operation. At the same time, the PC is incremented by 4.
• Step 2: The processor waits for the MFC signal from memory.
• Step 3: The processor loads the requested data into MDR, and then transfers it to IR.
2. Explain the four categories of Flynn's taxonomy.

Flynn's taxonomy classifies computers by the number of instruction streams and data streams:
• SISD (Single Instruction, Single Data): a conventional uniprocessor executing one instruction stream on one data stream.
• SIMD (Single Instruction, Multiple Data): one instruction stream applied to many data items in parallel, as in vector and array processors.
• MISD (Multiple Instruction, Single Data): several instruction streams operating on the same data stream; rarely used in practice.
• MIMD (Multiple Instruction, Multiple Data): multiple processors, each executing its own instruction stream on its own data, as in multiprocessors and multicomputers.
3. What is a pipeline? Explain the 4 stages of a pipeline with its instruction execution steps and hardware organisation.
Pipelining is an effective way of organizing concurrent activity in a computer. Instruction execution is divided into four stages: Fetch (F), Decode (D), Execute (E) and Write (W), carried out by four separate hardware units so that the stages of different instructions can overlap (refer diagram from text book). Four instructions are in progress at any given time; in clock cycle 4, for example, instruction I1 is in its Write stage while I2 is executing, I3 is being decoded and I4 is being fetched.
4. Illustrate the sequence of operations required to execute the instruction Add (R3), R1.

For the single-bus organisation of the processor, the control sequence is:
1. PCout, MARin, Read, Select4, Add, Zin
2. Zout, PCin, Yin, WMFC
3. MDRout, IRin
4. R3out, MARin, Read
5. R1out, Yin, WMFC
6. MDRout, SelectY, Add, Zin
7. Zout, R1in, End
5. With a neat diagram, explain the structure of general-purpose multiprocessors.

6. With a neat diagram, explain the translation hierarchy for a Java program.


A Java program is first compiled into Java bytecodes. A software interpreter, called a Java Virtual Machine (JVM), can execute the Java bytecodes.
The upside of interpretation is portability.
The downside of interpretation is lower performance.
To preserve portability and improve execution speed, the next phase of Java development was compilers that translate while the program is running.
Such Just-In-Time (JIT) compilers typically profile the running program to find where the “hot” methods are and then compile them into the native instruction set on which the virtual machine is running.
As computers get faster, so that compilers can do more, and as researchers invent better ways to compile Java on the fly, the performance gap between Java and compiled languages such as C or C++ is closing.
