Memory: Introduction To Microprocessor
1.1 Introduction
A microprocessor is a multipurpose, programmable, clock-driven, register-based electronic device that reads binary instructions from a storage device called memory, accepts binary data as input and processes data according to those instructions, and provides results as output.
A typical programmable machine can be represented with four components, as shown in Figure 1.1. These four components work together as a system to perform a given task. The physical components of this system are called hardware. A set of instructions written for the microprocessor to perform a task is called a program, and a group of programs is called software. The system represented in Figure 1.1 can be programmed to perform various tasks, for example to compute mathematical functions, turn traffic lights on and off, or keep track of a guidance system. The system may be simple or sophisticated, depending on its application. Microprocessor applications can be classified into two categories: reprogrammable systems and embedded systems. In reprogrammable systems, the microprocessor is used for computing and data processing; an example is the personal computer, or microcomputer. These systems include general-purpose microprocessors capable of handling large amounts of data, mass storage devices, and peripherals such as printers. In embedded systems, the microprocessor is part of the final product and is not available for reprogramming by the end user. Embedded systems can also be viewed as products that use microprocessors to perform their operation; they are known as microprocessor-based products. Examples of embedded systems are washing machines and copying machines.
1.2 Computer System
A computer system is built from three main components: the microprocessor, memory, and input/output devices. Figure 1.2 shows the block diagram of a computer system. The microprocessor is the main component that controls the system; memory and input/output devices are supporting components that complete the system. Communication between the microprocessor and the peripherals is done via the system bus, which is simply a group of wires carrying voltages and currents that represent the different bit values.
1.2.1 Memory
Memory stores the information needed by the processor. There are two types of memory: Random Access Memory (RAM) and Read Only Memory (ROM). ROM is used to store information that does not change. RAM, also known as Read/Write Memory, is used to store information supplied by the user, such as programs and data.
1.2.2 Input/Output (I/O) Device
I/O devices are the system's means of communicating with the outside world; collectively they are known as peripherals. Input devices transfer binary information from the outside world to the microprocessor; examples are the keyboard, mouse, barcode reader, and scanner. Output devices transfer binary information from the microprocessor to the outside world; examples are the LED, monitor, and printer.
1.3 Overview of Microprocessor
The word microprocessor is a combination of micro and processor. A processor is a device that processes something; in this context, it processes numbers, specifically the binary numbers 0 and 1. Micro means very small: all of the components that make up the processor are placed on a single piece of silicon. Generally, the microprocessor is a programmable device that takes in numbers, performs arithmetic or logical operations on them according to the program stored in memory, and then produces other numbers as a result. Internally, the microprocessor is made up of three main units:
- the Arithmetic/Logic Unit (ALU)
- the Control Unit
- an array of registers for holding data while it is being manipulated
1.3.1 Arithmetic/Logic Unit (ALU)
The ALU performs all computing and logic operations, such as addition and subtraction as well as AND, OR, and XOR.
1.3.2 Control Unit
As the name implies, the control unit controls what is happening in the microprocessor. It provides the necessary control and timing signals for all operations in the microprocessor as well as for its contact with the outside world. The control unit has control lines to each of the microprocessor's logic functions: the ALU, registers, memory, and I/O. Timing signals such as the clock provide synchronization for communication between the components of the microprocessor. The control unit also handles interrupts and the power-up sequence.
1.3.3 Register Array
The register array is a collection of registers within the microprocessor itself. These registers are used primarily for data storage during program execution. The number and size of these registers differ from one microprocessor to another. Registers are fast memory elements. Some registers are general-purpose while others are special-purpose. General-purpose registers are free to be used by the programmer for any purpose. Special-purpose registers perform specific tasks, such as holding the flag indicators.
1.4 Functions of a Microprocessor
The main function of a microprocessor is data processing, which involves computation and data handling. Computation is done through arithmetic and logic operations in the Arithmetic Logic Unit (ALU), while data handling is done through logic circuits outside the ALU that move data around. The second function of a microprocessor is control processing, which involves the control logic to fetch, decode, and execute a program. To execute a program, the microprocessor fetches each instruction, decodes it, and then executes it; this sequence continues until all instructions have been performed or an instruction to stop is encountered. The microprocessor also functions as a coordinator for the rest of the system: it coordinates its operation with all the external circuits connected to it, such as the memory and the input and output devices. In summary, a microprocessor performs calculation, logical operations, program fetch and execution control, and coordination with the rest of the system.
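The fetch-decode-execute sequence described above can be sketched as a toy program loop. The instruction names (LOAD, ADD, STOP) and the single-accumulator machine here are invented purely for illustration and do not belong to any real processor:

```python
# Toy sketch of the fetch-decode-execute cycle: fetch an instruction
# from memory, advance the program counter, decode the opcode,
# execute it, and repeat until a stop instruction is reached.
def run(memory):
    acc = 0   # accumulator register
    pc = 0    # program counter
    while True:
        op, arg = memory[pc]      # fetch
        pc += 1
        if op == "LOAD":          # decode + execute
            acc = arg
        elif op == "ADD":
            acc += arg
        elif op == "STOP":
            break
    return acc

program = [("LOAD", 7), ("ADD", 5), ("STOP", 0)]
print(run(program))  # 12
```

A real control unit does the same thing in hardware, with the program counter addressing memory and the decoder driving control lines.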
1.5 History
The microprocessor was the result of two technologies:
- digital computers, starting in the 1940s for scientific and business work
- solid-state circuits, starting in 1948 with the invention of the transistor

Some important milestones in computing technology:

1936: Konrad Zuse made a mechanical calculator called the Z1, the first binary computer. It explored several groundbreaking technologies in calculator development: floating-point arithmetic, high-capacity memory, and modules or relays operating on the yes/no principle.

1942: Professor John Atanasoff and graduate student Clifford Berry built the world's first electronic digital computer at Iowa State University. The Atanasoff-Berry Computer (ABC) represented several innovations in computing, including a binary system of arithmetic, parallel processing, regenerative memory, and a separation of memory and computing functions.

1944: Howard Aiken and Grace Hopper designed the MARK series of computers at Harvard University. The series began with the Mark I, a giant roomful of noisy, clicking metal parts, 55 feet long and 8 feet high. The 5-ton device contained almost 760,000 separate pieces and was used by the US Navy for gunnery and ballistic calculations.

1946: John Mauchly and J. Presper Eckert developed the ENIAC I (Electrical Numerical Integrator and Calculator). The ENIAC contained 17,468 vacuum tubes, along with 70,000 resistors, 10,000 capacitors, 1,500 relays, 6,000 manual switches, and 5 million soldered joints. It covered 167 square meters of floor space, weighed 30 tons, and consumed 160 kilowatts of electrical power.

1948: John Bardeen, William Shockley, and Walter Brattain, scientists at the Bell Telephone Laboratories, invented the transistor.

1951: The Universal Automatic Computer (UNIVAC) was invented by J. Presper Eckert and John Mauchly, the team that invented the ENIAC.

1953: Development of IBM's 701 EDPM, the first commercially successful general-purpose computer.

1961: Vacuum tubes were replaced with solid-state discrete devices in digital computers. Mainframes and minicomputers were developed: mainframes were general purpose, using batch and timesharing modes of operation, while minicomputers were desktop-size computers dedicated to performing a single kind of job. Integrated circuit (IC) technology was developed; the first commercially available IC came from the Fairchild Semiconductor Corporation, and computers started to be made using chips instead of individual transistors and their accompanying parts.

Mid 1960s: Small-scale integration (SSI) and medium-scale integration (MSI) of circuits were developed. Minicomputers using SSI/MSI were as powerful as the mainframes of the 1950s and cheaper than their older brothers.

Early 1970s: Large-scale integration (LSI) technology was developed. The first microprocessor using LSI technology was built by Intel in 1971 and was called the 4004; LSI reduced calculator size by using only a single chip instead of several ICs. Intel then introduced the 8080 processor, a complete 8-bit processor with a 16-bit address bus. The first computer to use this chip was the Altair 8800, introduced in 1975.

1976: Apple Computer was born with the introduction of the Apple I, using the 6502 microprocessor, an 8-bit processor with a 16-bit address bus. The chip was designed by Rockwell and produced by MOS Technologies.

1980s: VLSI technology became commonplace, and a complete 8-bit microprocessor system on a chip (a microprocessor with memory and input/output ability) was developed. It was called the microcontroller.

1981: The IBM personal computer (PC), using the Intel 8088 microprocessor, was the first popular computer introduced for the masses. With its introduction, the personal computer revolution began, and the next generations of PCs were further improvements on the basic design of the 8088.
1.6 Number System
A computer operates using electrical signals that can be expressed as 0 and 1 in the binary system: a 1 is given when there is a signal, and a 0 when there is no signal. In dealing with microprocessors, three number systems are used:
- decimal
- binary
- hexadecimal
1.6.1 Decimal
Base 10 is used in decimal, meaning that each digit can take a value from 0 to 9. Each digit contributes its value times the power of ten given by its position.
1.6.2 Binary
Base 2 is used in binary, meaning that each digit can only be 0 or 1.
1.6.3 Hexadecimal
Base 16 is used in hexadecimal (or simply hex), where each digit can take a value from 0 to 15. Values 0 to 9 use the same symbols as decimal, while values 10 to 15 are written as A to F. Each hex digit is represented by 4 bits in binary.
Table 1.1: Relation between decimal, binary and hexadecimal numbers

Decimal   Binary   Hexadecimal
   0       0000        0
   1       0001        1
   2       0010        2
   3       0011        3
   4       0100        4
   5       0101        5
   6       0110        6
   7       0111        7
   8       1000        8
   9       1001        9
  10       1010        A
  11       1011        B
  12       1100        C
  13       1101        D
  14       1110        E
  15       1111        F
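Table 1.1 can be regenerated with Python's built-in binary and hexadecimal formatting, as a quick sketch:

```python
# Print each value 0-15 as decimal, 4-bit binary, and hexadecimal,
# reproducing the rows of Table 1.1.
for n in range(16):
    print(f"{n:>2}  {n:04b}  {n:X}")
```

The `04b` format pads the binary form to 4 bits, matching one hex digit per group of four bits.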
1.7 Conversion
1.7.1 Binary to Decimal
Example: Convert 1001.01₂ to a decimal number.
Solution:
1001.01₂ = (1 × 2³) + (0 × 2²) + (0 × 2¹) + (1 × 2⁰) + (0 × 2⁻¹) + (1 × 2⁻²)
         = 8 + 0 + 0 + 1 + 0 + 0.25
         = 9.25₁₀
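The positional expansion above can be sketched in Python, assuming the number is given as a string of bits with an optional binary point:

```python
# Binary-to-decimal by positional weights: integer bits use
# 2^0, 2^1, ... from the right; fraction bits use 2^-1, 2^-2, ...
def bin_to_dec(s):
    if "." in s:
        int_part, frac_part = s.split(".")
    else:
        int_part, frac_part = s, ""
    value = 0.0
    for i, bit in enumerate(reversed(int_part)):
        value += int(bit) * 2**i        # weights 2^0, 2^1, ...
    for i, bit in enumerate(frac_part, start=1):
        value += int(bit) * 2**-i       # weights 2^-1, 2^-2, ...
    return value

print(bin_to_dec("1001.01"))  # 9.25
```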
1.7.2 Decimal to Binary
Example 1: Convert 25₁₀ to a binary number.
Solution: Divide repeatedly by 2 and collect the remainders:

25 ÷ 2 = 12  remainder 1  (LSB)
12 ÷ 2 =  6  remainder 0
 6 ÷ 2 =  3  remainder 0
 3 ÷ 2 =  1  remainder 1
 1 ÷ 2 =  0  remainder 1  (MSB)

So, 25₁₀ = 11001₂
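The repeated-division method can be sketched as:

```python
# Decimal-to-binary by repeated division: each remainder is the
# next bit, LSB first, so new bits are prepended to the string.
def dec_to_bin(n):
    bits = ""
    while n > 0:
        bits = str(n % 2) + bits   # remainder becomes the next bit
        n //= 2
    return bits or "0"

print(dec_to_bin(25))  # 11001
```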
Example 2: Convert 134.375₁₀ to a binary number.
Solution: For the integer part, divide repeatedly by 2 and collect the remainders; for the fraction part, multiply repeatedly by 2 and collect the carries:

134 ÷ 2 = 67  remainder 0  (LSB)
 67 ÷ 2 = 33  remainder 1
 33 ÷ 2 = 16  remainder 1
 16 ÷ 2 =  8  remainder 0
  8 ÷ 2 =  4  remainder 0
  4 ÷ 2 =  2  remainder 0
  2 ÷ 2 =  1  remainder 0
  1 ÷ 2 =  0  remainder 1  (MSB)

0.375 × 2 = 0.75  carry 0  (MSB)
0.75  × 2 = 1.5   carry 1
0.5   × 2 = 1.0   carry 1  (LSB)

So, 134.375₁₀ = 10000110.011₂

1.7.4 Decimal to Hexadecimal
Example: Convert 634.328125₁₀ to a hexadecimal number.
Solution: For the integer part, divide repeatedly by 16 and collect the remainders; for the fraction part, multiply repeatedly by 16 and collect the carries:

634 ÷ 16 = 39  remainder 10 (A)  (LSB)
 39 ÷ 16 =  2  remainder 7
  2 ÷ 16 =  0  remainder 2       (MSB)

0.328125 × 16 = 5.25  carry 5  (MSB)
0.25     × 16 = 4.00  carry 4  (LSB)

So, 634.328125₁₀ = 27A.54₁₆
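Both halves of the decimal-to-hexadecimal method (divide the integer part by 16, multiply the fraction by 16) can be sketched as:

```python
# Decimal-to-hex: remainders of repeated division give the integer
# digits (LSB first); carries of repeated multiplication give the
# fraction digits (MSB first). `places` caps non-terminating fractions.
DIGITS = "0123456789ABCDEF"

def dec_to_hex(n, frac, places=6):
    int_part = ""
    while n > 0:
        int_part = DIGITS[n % 16] + int_part
        n //= 16
    frac_part = ""
    while frac > 0 and len(frac_part) < places:
        frac *= 16
        digit = int(frac)              # the carry
        frac_part += DIGITS[digit]
        frac -= digit
    return (int_part or "0") + ("." + frac_part if frac_part else "")

print(dec_to_hex(634, 0.328125))  # 27A.54
```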
The conversion between binary and hexadecimal is straightforward: group the bits in fours (one hex character per group), working outward from the binary point.
Example: Convert 10101011111101.0101101₂ to a hexadecimal number.
Solution:

0010  1010  1111  1101  .  0101  1010
 2     A     F     D    .   5     A

So, 10101011111101.0101101₂ = 2AFD.5A₁₆
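The 4-bit grouping rule can be sketched as, padding the integer part on the left and the fraction part on the right:

```python
# Binary-to-hex by grouping bits in fours on each side of the
# binary point; each 4-bit group maps to one hex character.
def bin_to_hex(s):
    int_part, _, frac_part = s.partition(".")
    int_part = int_part.zfill((len(int_part) + 3) // 4 * 4)
    frac_part = frac_part.ljust((len(frac_part) + 3) // 4 * 4, "0")
    to_hex = lambda g: format(int(g, 2), "X")
    hi = "".join(to_hex(int_part[i:i+4]) for i in range(0, len(int_part), 4))
    lo = "".join(to_hex(frac_part[i:i+4]) for i in range(0, len(frac_part), 4))
    return hi + ("." + lo if lo else "")

print(bin_to_hex("10101011111101.0101101"))  # 2AFD.5A
```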
1.8 Binary Arithmetic
1.8.1 Addition
Binary addition works column by column from the LSB: each column adds the two bits plus the carry in, producing a sum bit and a carry out.
Example:

    1 0 1 1 1 1 0 1
+   1 1 0 1 0 0 1 1
-------------------
  1 1 0 0 1 0 0 0 0
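The column-by-column addition with carry can be sketched as:

```python
# Ripple addition on bit strings: add matching bit positions plus
# the carry, from LSB to MSB, prepending each sum bit.
def bin_add(a, b):
    width = max(len(a), len(b))
    a, b = a.zfill(width), b.zfill(width)
    carry, out = 0, ""
    for x, y in zip(reversed(a), reversed(b)):
        s = int(x) + int(y) + carry
        out = str(s % 2) + out
        carry = s // 2
    if carry:
        out = "1" + out    # final carry extends the result by one bit
    return out

print(bin_add("10111101", "11010011"))  # 110010000
```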
1.8.2 Subtraction
Binary subtraction is done by adding the minuend to the two's complement of the subtrahend. The carry out of the MSB acts as the Sign indicator.
Example 1: 92 - 47 = ?

Step 1: Take the two's complement of the subtrahend.
47 = 0 0 1 0 1 1 1 1
1C = 1 1 0 1 0 0 0 0
2C = 1 1 0 1 0 0 0 1

Step 2: Add it to the minuend.
    0 1 0 1 1 1 0 0   (92)
+   1 1 0 1 0 0 0 1   (2C of 47)
-------------------
  1 0 0 1 0 1 1 0 1
  Sign = 1

Step 3: If the Sign is 1, the result is a positive number and the magnitude is converted to decimal.
00101101₂ = +45₁₀

Example 2: 47 - 123 = ?

Step 1: Take the two's complement of the subtrahend.
123 = 0 1 1 1 1 0 1 1
2C  = 1 0 0 0 0 1 0 1

Step 2: Add it to the minuend.
    0 0 1 0 1 1 1 1   (47)
+   1 0 0 0 0 1 0 1   (2C of 123)
-------------------
  0 1 0 1 1 0 1 0 0
  Sign = 0

Step 3: If the Sign is 0, the result is a negative number; take the two's complement of the sum to get the magnitude.
2C of 10110100₂ = 01001100₂
Step 4: Convert the magnitude and attach the negative sign.
01001100₂ = -76₁₀

1.8.3 Multiplication
Example:

      1 0 1 1
  ×     1 0 1
  -----------
      1 0 1 1
    0 0 0 0
  1 0 1 1
  -----------
  1 1 0 1 1 1

So, 1011₂ × 101₂ = 110111₂ (11 × 5 = 55)
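The two's-complement subtraction steps of Section 1.8.2 can be sketched on 8-bit values; the carry out of the top bit plays the role of the Sign indicator in the examples:

```python
# Two's-complement subtraction on fixed-width values: complement
# the subtrahend, add, then use the final carry to decide the sign.
def sub_2c(a, b, width=8):
    mask = (1 << width) - 1
    twos_b = (~b & mask) + 1            # one's complement, then +1
    total = a + twos_b
    carry = total >> width              # carry 1 -> positive result
    result = total & mask
    if carry:
        return result                   # magnitude as-is
    return -(((~result & mask) + 1) & mask)   # negate via 2C

print(sub_2c(47, 123))  # -76
```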
1.8.4 Division
Example: Divide 10001101₂ by 100₂ (141 ÷ 4), continuing past the binary point:

           1 0 0 0 1 1 . 0 1
         ---------------------
1 0 0 )  1 0 0 0 1 1 0 1 . 0 0
         1 0 0
         -----
               1 1 0
               1 0 0
               -----
                 1 0 1
                 1 0 0
                 -----
                     1 0 0
                     1 0 0
                     -----
                         0

So, 10001101₂ ÷ 100₂ = 100011.01₂
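Long division can be sketched by dividing the integer parts and then bringing down zeros past the binary point, one fraction bit at a time:

```python
# Binary long division producing a fixed number of fraction bits:
# after the integer quotient, double the remainder and compare it
# against the divisor for each bit past the binary point.
def bin_div(dividend, divisor, frac_bits=2):
    q, r = divmod(dividend, divisor)
    digits = format(q, "b") + "."
    for _ in range(frac_bits):      # bring down zeros past the point
        r *= 2
        digits += str(r // divisor)
        r %= divisor
    return digits

print(bin_div(0b10001101, 0b100))  # 100011.01
```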
1.9 Negative Numbers
There are three ways to represent negative numbers in a microprocessor:
- Sign-Magnitude (SM) form
- One's Complement (1C) form
- Two's Complement (2C) form
1.9.1 Sign-Magnitude (SM)
SM uses the most significant bit (MSB) as the sign bit: a 0 indicates a positive number, while a 1 indicates a negative number. The remaining bits hold the magnitude.
1.9.2 One's Complement
This system inverts every bit of the number. For example, 10101010 when one's complemented becomes 01010101.
1.9.3 Two's Complement
This system is used in subtraction operations. The one's complement is taken as the first step, then 1 is added to the result to get the two's complement.

Table 1.2: Negative numbers using SM, 1C and 2C form

Binary     Unsigned    SM     1C     2C
00000000       0       +0     +0      0
00000001       1       +1     +1     +1
00000010       2       +2     +2     +2
...
01111110     126     +126   +126   +126
01111111     127     +127   +127   +127
10000000     128       -0   -127   -128
10000001     129       -1   -126   -127
...
11111110     254     -126     -1     -2
11111111     255     -127     -0     -1
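The rows of Table 1.2 can be checked with a small helper (an illustrative sketch) that interprets the same 8-bit pattern under each of the three forms:

```python
# Interpret one 8-bit pattern as Sign-Magnitude, One's Complement,
# and Two's Complement, matching the columns of Table 1.2.
def interpret(bits):
    sign, mag = bits[0], bits[1:]
    sm = int(mag, 2) * (-1 if sign == "1" else 1)          # SM
    oc = int(bits, 2) if sign == "0" else -(int(bits, 2) ^ 0xFF)  # 1C
    tc = int(bits, 2) - (256 if sign == "1" else 0)        # 2C
    return sm, oc, tc

print(interpret("10000001"))  # (-1, -126, -127)
```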
1.10 Bit, Byte and Word
The bit is the smallest unit of data in a computer; for example, 1010 is 4-bit data. A 4-bit quantity is also called a nibble, and an 8-bit quantity is called a byte. A group of bits that is processed at one time is called a word; depending on the computer, a word may be 8, 16, or 32 bits. The byte is used as a unit of measure in microprocessor systems, and a memory word is usually a byte wide. Memories are measured in bytes: kilobytes (1K = 2¹⁰ = 1,024 ≈ 10³), megabytes (1M = 2²⁰ = 1,048,576 ≈ 10⁶), gigabytes (1G = 2³⁰ = 1,073,741,824 ≈ 10⁹), and terabytes (1T = 2⁴⁰ = 1,099,511,627,776 ≈ 10¹²). The nearest power of 2 to 1,000 is 2¹⁰ = 1,024, so 1 Kbyte of memory equals 1,024 memory cells, each storing 8 bits of data.
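The memory-unit powers of two above can be checked directly:

```python
# Memory units as powers of two, per the text: 1K = 2**10, etc.
units = {"K": 2**10, "M": 2**20, "G": 2**30, "T": 2**40}
for name, size in units.items():
    print(f"1{name}byte = {size} bytes")

# 1 Kbyte of memory holds 1024 cells of 8 bits each:
print(2**10 * 8, "bits per Kbyte")
```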