Mass Storage Systems
CHAPTER 4
MASS-STORAGE SYSTEMS
1
OUTLINE
2
OBJECTIVES
Describe the physical structure of secondary storage devices and the effect of a
device’s structure on its uses
Explain the performance characteristics of mass-storage devices
Evaluate I/O scheduling algorithms
Discuss operating-system services provided for mass storage, including RAID
3
OVERVIEW OF MASS STORAGE STRUCTURE
4
I/O SYSTEMS
5
MAGNETIC DISKS
6
DISK SECTOR
7
SOLID-STATE DISKS
9
DISK STRUCTURE
Disk drives are addressed as large 1-dimensional arrays of logical blocks, where the
logical block is the smallest unit of transfer
Low-level formatting creates logical blocks on physical media
The 1-dimensional array of logical blocks is mapped into the sectors of the disk
sequentially
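The sequential mapping described above can be sketched as a simple LBA-to-CHS translation. This is a minimal sketch, not a real controller's mapping (real drives hide defects and zone recording); the geometry values (heads, sectors per track) are hypothetical:

```python
def lba_to_chs(lba, heads, sectors_per_track):
    """Map a logical block address to (cylinder, head, sector).

    Sectors are conventionally numbered from 1; cylinders and heads from 0.
    """
    cylinder = lba // (heads * sectors_per_track)
    head = (lba // sectors_per_track) % heads
    sector = (lba % sectors_per_track) + 1
    return cylinder, head, sector

# Example with a hypothetical geometry of 16 heads, 63 sectors per track
print(lba_to_chs(0, 16, 63))    # first block -> (0, 0, 1)
print(lba_to_chs(63, 16, 63))   # next track  -> (0, 1, 1)
```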
10
MAGNETIC DISKS
Platters range from 0.85” to 14” (commonly 3.5”, 2.5”, and 1.8”)
Capacities range from 30GB to 3TB per drive
Performance:
Transfer rate – theoretical – 6 Gb/sec
Effective transfer rate – real – 1Gb/sec
Seek time from 3ms to 12ms (9ms common for desktop drives)
Rotational latency is based on spindle speed
Rotation time = 1 / (RPM / 60) = 60 / RPM seconds; average latency is half a rotation
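The latency formula above can be checked numerically; a minimal sketch:

```python
def rotation_time_ms(rpm):
    # One full rotation takes 60 / RPM seconds = 60,000 / RPM milliseconds
    return 60_000 / rpm

def avg_rotational_latency_ms(rpm):
    # On average the head waits half a rotation for the target sector
    return rotation_time_ms(rpm) / 2

print(round(avg_rotational_latency_ms(7200), 2))   # 4.17 ms
print(round(avg_rotational_latency_ms(5400), 2))   # 5.56 ms
```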
11
MAGNETIC DISK PERFORMANCE
Access Latency = Average access time = average seek time + average latency
For the fastest disks (15,000 RPM): 3ms + 2ms = 5ms
For a common disk (5,400 RPM): 9ms + 5.56ms = 14.56ms
Average I/O time = average access time + (amount to transfer / transfer rate) +
controller overhead
For example, to transfer a 4KB block on a 7200 RPM disk with a 5ms average seek
time, a 1Gb/sec transfer rate, and 0.1ms controller overhead:
Average I/O time = 5ms + 4.17ms + 0.1ms + transfer time
Transfer time = 4KB / 1Gb/s = 32Kb / 1024² Kb/s = 32 / 1024² s ≈ 0.031ms
Average I/O time for a 4KB block = 9.27ms + 0.031ms = 9.301ms
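The worked example above can be reproduced directly; a minimal sketch using the same figures (4KB block, 7200 RPM, 5ms average seek, 1Gb/s transfer rate, 0.1ms controller overhead):

```python
def avg_io_time_ms(block_bytes, rpm, seek_ms, rate_gbps, controller_ms):
    rotational_ms = 60_000 / rpm / 2                       # half a rotation on average
    transfer_ms = block_bytes * 8 / (rate_gbps * 1024**3) * 1000
    return seek_ms + rotational_ms + transfer_ms + controller_ms

# 5 + 4.17 + 0.031 + 0.1 ≈ 9.30 ms, matching the figure above
print(round(avg_io_time_ms(4 * 1024, 7200, 5.0, 1, 0.1), 2))
```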
12
DISK SCHEDULING
The operating system is responsible for using hardware efficiently — for the disk
drives, this means having a fast access time and disk bandwidth
Minimize seek time
Seek time is roughly proportional to seek distance
Disk bandwidth is the total number of bytes transferred, divided by the total time
between the first request for service and the completion of the last transfer
13
DISK SCHEDULING (CONT.)
14
DISK SCHEDULING (CONT.)
15
1. FCFS SCHEDULING
16
2. SSTF SCHEDULING
Shortest Seek Time First (SSTF) selects the request with the minimum seek time
from the current head position
SSTF scheduling is a form of SJF scheduling; may cause starvation of some requests
Illustration shows total head movement of 236 cylinders
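SSTF can be sketched as a greedy loop; using the classic textbook queue (98, 183, 37, 122, 14, 124, 65, 67) with the head at cylinder 53 reproduces the 236-cylinder total mentioned above:

```python
def sstf(requests, head):
    """Service requests in shortest-seek-time-first order.

    Returns (service order, total head movement in cylinders).
    """
    pending = list(requests)
    order, total = [], 0
    while pending:
        # Greedily pick the request closest to the current head position
        nxt = min(pending, key=lambda r: abs(r - head))
        total += abs(nxt - head)
        head = nxt
        order.append(nxt)
        pending.remove(nxt)
    return order, total

order, moved = sstf([98, 183, 37, 122, 14, 124, 65, 67], head=53)
print(order)   # [65, 67, 37, 14, 98, 122, 124, 183]
print(moved)   # 236
```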
17
3. SCAN SCHEDULING
The disk arm starts at one end of the disk, and moves toward the other end, servicing
requests until it gets to the other end of the disk, where the head movement is
reversed and servicing continues.
The SCAN algorithm is sometimes called the elevator algorithm
Note that if requests are uniformly dense, the greatest density of pending requests is at the other end of the disk, and those requests wait the longest
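SCAN can be sketched by splitting the queue around the head position. This is a minimal sketch that assumes the head sweeps all the way to cylinder 0 (or to `max_cylinder`) before reversing; with the same textbook queue and the head at 53 moving toward 0, the head travels 53 + 183 = 236 cylinders:

```python
def scan(requests, head, max_cylinder=199, direction="down"):
    """SCAN (elevator): sweep to one end of the disk, then reverse.

    Returns (service order, total head movement in cylinders).
    """
    lower = sorted(r for r in requests if r <= head)   # at or below the head
    upper = sorted(r for r in requests if r > head)    # above the head
    if direction == "down":
        order = lower[::-1] + upper
        # travel down to cylinder 0, then up to the highest request
        total = head + (upper[-1] if upper else 0)
    else:
        order = upper + lower[::-1]
        # travel up to the last cylinder, then down to the lowest request
        total = (max_cylinder - head) + (max_cylinder - (lower[0] if lower else max_cylinder))
    return order, total

order, moved = scan([98, 183, 37, 122, 14, 124, 65, 67], head=53)
print(order)   # [37, 14, 65, 67, 98, 122, 124, 183]
print(moved)   # 236
```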
18
3. SCAN SCHEDULING (CONT.)
19
4. C-SCAN SCHEDULING
20
4. C-SCAN SCHEDULING (CONT.)
21
5. LOOK, C-LOOK SCHEDULING
22
SELECTING A DISK-SCHEDULING ALGORITHM
23
DISK MANAGEMENT
Before a disk can store data, it must be divided into sectors that the disk controller can read and write.
This process is called low-level formatting, or physical formatting
Each sector can hold header information, plus data (usually 512 bytes), plus an error-correcting code
(ECC)
To use a disk to hold files, the operating system still needs to record its own data structures on the disk
Partition the disk into one or more groups of cylinders, each treated as a logical disk
Logical formatting or “making a file system”:
OS stores the initial file-system data structures
Include maps of free and allocated space
24
DISK MANAGEMENT (CONT.)
Raw disk access is provided for applications that want to do their own block management and keep the
OS out of the way (databases, for example)
Boot block initializes system
The bootstrap is stored in ROM
The problem is that changing this bootstrap code requires changing the ROM hardware chips.
For this reason, most systems store a tiny bootstrap loader program in the boot
ROM whose only job is to bring in a full bootstrap program from disk
A disk that has a boot partition is called a boot disk or system disk.
25
BAD BLOCKS
Because disks have moving parts and small tolerances (recall that the disk head flies just above the disk
surface), they are prone to failure
More frequently, one or more sectors become defective. Most disks even come from the factory with
bad blocks
Depending on the disk and controller in use, these blocks are handled in a variety of ways
One strategy is to scan the disk to find bad blocks while the disk is being formatted.
Any bad blocks that are discovered are flagged as unusable so that the file system does not allocate
them
A special program (such as the Linux badblocks command) must be run to search for the bad blocks
and to lock them away
Data that resided on the bad blocks is usually lost
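The "flag as unusable" strategy can be sketched with a free-block bitmap. This is a minimal sketch, not how any particular file system stores its free map, and the bad-block numbers are hypothetical:

```python
NUM_BLOCKS = 64
free = [True] * NUM_BLOCKS          # True = available for allocation

def lock_away(bad_blocks):
    # Mark bad blocks as in-use so the allocator never hands them out
    for b in bad_blocks:
        free[b] = False

def allocate():
    # First-fit allocation over the remaining good blocks
    for i, is_free in enumerate(free):
        if is_free:
            free[i] = False
            return i
    raise RuntimeError("no free blocks")

lock_away({0, 1, 7})                # e.g. blocks a format-time scan found defective
print(allocate())                   # 2 -- blocks 0 and 1 are skipped
```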
26
SWAP-SPACE MANAGEMENT
27
RAID STRUCTURE
RAID within a storage array can still fail if the array fails, so automatic replication of the data
between arrays is common
Frequently, a small number of hot-spare disks are left unallocated, automatically replacing a failed disk
and having data rebuilt onto them
29
RAID Levels
30
RAID 0
31
RAID 1
32
RAID 2
33
RAID 3
34
RAID 4
35
RAID 5
36
RAID 10
37