Unit 5
Mass-Storage Systems
Overview of Mass Storage Structure
Disk Structure
Disk Attachment
Disk Scheduling
Disk Management
Swap-Space Management
RAID Structure
Stable-Storage Implementation
Objectives
To describe the physical structure of secondary storage devices
and its effects on the uses of the devices
To explain the performance characteristics of mass-storage
devices
To evaluate disk scheduling algorithms
To discuss operating-system services provided for mass storage,
including RAID
Overview of Mass Storage Structure
Magnetic disks provide bulk of secondary storage of modern computers
Drives rotate at 60 to 250 times per second
Transfer rate is rate at which data flow between drive and computer
Positioning time (random-access time) is time to move disk arm to
desired cylinder (seek time) and time for desired sector to rotate
under the disk head (rotational latency)
Head crash results from disk head making contact with the disk
surface -- That’s bad
Disks can be removable
Drive attached to computer via I/O bus
Busses vary, including EIDE, ATA, SATA, USB, Fibre Channel,
SCSI, SAS, Firewire
Host controller in computer uses bus to talk to disk controller built
into drive or storage array
Moving-head Disk Mechanism
Hard Disks
Platters range from .85” to 14” (historically)
Commonly 3.5”, 2.5”, and 1.8”
Range from 30GB to 3TB per drive
Performance
Transfer Rate – theoretical – 6 Gb/sec
Effective Transfer Rate – real – 1 Gb/sec
Seek time from 3 ms to 12 ms – 9 ms common for desktop drives
Average seek time measured or calculated based on 1/3 of the number of tracks (for uniformly distributed requests, the expected seek distance is about one third of the full stroke)
Latency based on spindle speed
Time for one full rotation = 1 / (RPM / 60) = 60 / RPM seconds (from Wikipedia)
Average rotational latency = ½ of the full-rotation time
Hard Disk Performance
Access Latency = Average access time = average seek time +
average latency
For fastest disk 3ms + 2ms = 5ms
For slow disk 9ms + 5.56ms = 14.56ms
Average I/O time = average access time + (amount to transfer /
transfer rate) + controller overhead
For example, to transfer a 4 KB block on a 7200 RPM disk with a 5 ms average seek time, a 1 Gb/sec transfer rate, and a 0.1 ms controller overhead:
Average I/O time = 5 ms + 4.17 ms + 0.1 ms + transfer time
Transfer time = 4 KB / 1 Gb/s × 8 Gb / GB × 1 GB / 1024² KB = 32 / 1024² s ≈ 0.031 ms
Average I/O time for the 4 KB block = 9.27 ms + 0.031 ms = 9.301 ms
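As a quick check of the arithmetic above, here is a minimal C sketch that recomputes the average rotational latency from the spindle speed and the average I/O time for the 4 KB example (the numbers are the slide's example values, not measurements; small rounding differences from the 9.301 ms figure are expected).

    #include <stdio.h>

    int main(void) {
        double rpm = 7200.0;          /* spindle speed */
        double avg_seek_ms = 5.0;     /* average seek time */
        double controller_ms = 0.1;   /* controller overhead */
        double block_kb = 4.0;        /* transfer size in KB */
        double rate_gb_per_s = 1.0;   /* effective transfer rate in Gb/s */

        /* One full rotation takes 60/RPM seconds; average latency is half of that. */
        double avg_latency_ms = 0.5 * (60.0 / rpm) * 1000.0;            /* ~4.17 ms */

        /* 4 KB = 4*8 Kb = 32 Kb; divide by 1024^2 to express in Gb, then by Gb/s. */
        double transfer_ms = (block_kb * 8.0 / (1024.0 * 1024.0)) / rate_gb_per_s * 1000.0;

        double avg_io_ms = avg_seek_ms + avg_latency_ms + controller_ms + transfer_ms;
        printf("avg latency  = %.2f ms\n", avg_latency_ms);   /* 4.17 */
        printf("transfer     = %.3f ms\n", transfer_ms);      /* 0.031 */
        printf("avg I/O time = %.3f ms\n", avg_io_ms);        /* ~9.30 */
        return 0;
    }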
The First Commercial Disk Drive
1956
IBM RAMAC computer included the IBM Model 350 disk storage system
5M (7-bit) characters
50 × 24” platters
Access time < 1 second
Solid-State Disks
Nonvolatile memory used like a hard drive
Many technology variations
Can be more reliable than HDDs
More expensive per MB
May have a shorter life span
Less capacity
But much faster
Busses can be too slow -> connect directly to PCI for example
No moving parts, so no seek time or rotational latency
Magnetic Tape
Was early secondary-storage medium
Evolved from open spools to cartridges
Relatively permanent and holds large quantities of data
Access time slow
Random access ~1000 times slower than disk
Mainly used for backup, storage of infrequently-used data,
transfer medium between systems
Kept in spool and wound or rewound past read-write head
Once data under head, transfer rates comparable to disk
140MB/sec and greater
200GB to 1.5TB typical storage
Common technologies are LTO-{3,4,5} and T10000
Disk Structure
Disk drives are addressed as large 1-dimensional arrays of logical
blocks, where the logical block is the smallest unit of transfer
Low-level formatting creates logical blocks on physical media
The 1-dimensional array of logical blocks is mapped into the sectors
of the disk sequentially
Sector 0 is the first sector of the first track on the outermost
cylinder
Mapping proceeds in order through that track, then the rest of the
tracks in that cylinder, and then through the rest of the cylinders
from outermost to innermost
Logical-to-physical address translation should be easy
Except for bad sectors
Non-constant # of sectors per track via constant angular velocity
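The sequential mapping described above can be sketched in a few lines of C. The geometry constants below are made-up illustrative values (real drives hide the geometry behind the controller), and the sketch ignores bad-sector remapping and zoned recording:

    #include <stdio.h>

    /* Hypothetical geometry; real drives hide this behind the controller. */
    #define SECTORS_PER_TRACK 63
    #define HEADS_PER_CYLINDER 16

    /* Translate a logical block number into cylinder/head/sector,
       following the outermost-track-first ordering described above. */
    static void lba_to_chs(unsigned long lba,
                           unsigned long *cyl, unsigned long *head, unsigned long *sec) {
        *cyl  = lba / (SECTORS_PER_TRACK * HEADS_PER_CYLINDER);
        *head = (lba / SECTORS_PER_TRACK) % HEADS_PER_CYLINDER;
        *sec  = lba % SECTORS_PER_TRACK;   /* sector within the track */
    }

    int main(void) {
        unsigned long c, h, s;
        lba_to_chs(123456, &c, &h, &s);
        printf("cylinder %lu, head %lu, sector %lu\n", c, h, s);
        return 0;
    }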
Disk Attachment
Host-attached storage accessed through I/O ports talking to I/O
busses
SCSI itself is a bus, up to 16 devices on one cable, SCSI initiator
requests operation and SCSI targets perform tasks
Each target can have up to 8 logical units (disks attached to
device controller)
FC is high-speed serial architecture
Can be switched fabric with 24-bit address space – the basis of
storage area networks (SANs) in which many hosts attach to
many storage units
I/O directed to bus ID, device ID, logical unit (LUN)
Storage Array
Can just attach disks, or arrays of disks
Storage Array has controller(s), provides features to attached
host(s)
Ports to connect hosts to array
Memory, controlling software (sometimes NVRAM, etc)
A few to thousands of disks
RAID, hot spares, hot swap (discussed later)
Shared storage -> more efficiency
Features found in some file systems
Snapshots, clones, thin provisioning, replication, deduplication, etc.
Storage Area Network
Disk Management
Raw disk access for apps that want to do their own block management, keep OS out of the way (databases for example)
Boot block initializes system
The bootstrap is stored in ROM
Bootstrap loader program stored in boot blocks of boot
partition
Methods such as sector sparing used to handle bad blocks
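A tiny sketch of the sector-sparing idea: the controller keeps a remap table from bad blocks to spare blocks set aside at formatting time, and every I/O consults it first. The table layout and function names here are illustrative, not any particular drive's format:

    #include <stdio.h>

    #define MAX_SPARED 8

    /* Remap table: bad logical block -> spare block set aside at formatting time. */
    static struct { long bad, spare; } remap[MAX_SPARED];
    static int nspared;

    /* Record that 'bad' should from now on be serviced by 'spare'. */
    static void spare_sector(long bad, long spare) {
        if (nspared < MAX_SPARED) {
            remap[nspared].bad = bad;
            remap[nspared].spare = spare;
            nspared++;
        }
    }

    /* Every I/O consults the table before touching the disk. */
    static long translate(long block) {
        for (int i = 0; i < nspared; i++)
            if (remap[i].bad == block)
                return remap[i].spare;
        return block;
    }

    int main(void) {
        spare_sector(1023, 900000);   /* block 1023 went bad */
        printf("block 1023 now maps to %ld\n", translate(1023));
        return 0;
    }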
Booting from a Disk in Windows
Swap-Space Management
Swap-space — Virtual memory uses disk space as an extension of main memory
Less common now due to memory capacity increases
Swap-space can be carved out of the normal file system, or, more commonly, it
can be in a separate disk partition (raw)
Swap-space management
4.3BSD allocates swap space when process starts; holds text segment (the
program) and data segment
Kernel uses swap maps to track swap-space use (a minimal sketch follows this list)
Solaris 2 allocates swap space only when a dirty page is forced out of
physical memory, not when the virtual memory page is first created
File data written to swap space until write to file system requested
Other dirty pages go to swap space due to no other home
Text segment pages thrown out and reread from the file system as
needed
What if a system runs out of swap space?
Some systems allow multiple swap spaces
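Below is a minimal sketch of the kind of swap map mentioned in the list: one counter per page-sized swap slot, where zero means the slot is free. The slot count and helper names are illustrative, not the actual 4.3BSD or Linux data structures:

    #include <stdio.h>

    #define SWAP_SLOTS 1024   /* illustrative number of page-sized slots */

    /* One entry per slot: 0 = free, otherwise the number of mappings using it. */
    static unsigned char swap_map[SWAP_SLOTS];

    /* Find a free slot and mark it in use; return -1 if swap space is exhausted. */
    static int swap_alloc(void) {
        for (int i = 0; i < SWAP_SLOTS; i++) {
            if (swap_map[i] == 0) {
                swap_map[i] = 1;
                return i;
            }
        }
        return -1;   /* "what if a system runs out of swap space?" */
    }

    static void swap_free(int slot) {
        if (slot >= 0 && slot < SWAP_SLOTS && swap_map[slot] > 0)
            swap_map[slot]--;
    }

    int main(void) {
        int s = swap_alloc();
        printf("allocated swap slot %d\n", s);
        swap_free(s);
        return 0;
    }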
Data Structures for Swapping on Linux Systems
RAID Structure
RAID – redundant array of inexpensive disks
multiple disk drives provide reliability via redundancy
Increases the mean time to failure
Mean time to repair – exposure time when another failure could
cause data loss
Mean time to data loss based on above factors
If mirrored disks fail independently, consider a disk with a 100,000-hour mean time to failure and a 10-hour mean time to repair
Mean time to data loss is 100,000² / (2 ∗ 10) = 500 ∗ 10⁶ hours, or 57,000 years! (recomputed in the sketch after this list)
Frequently combined with NVRAM to improve write performance
Several improvements in disk-use techniques involve the use of
multiple disks working cooperatively
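A short sketch that just redoes the mean-time-to-data-loss arithmetic (the reasoning: the first of the two mirrored disks fails on average after MTTF/2 hours, and data is lost only if the second disk also fails within the repair window, which happens with probability roughly MTTR/MTTF, giving MTTF² / (2 × MTTR)):

    #include <stdio.h>

    int main(void) {
        double mttf_hours = 100000.0;   /* mean time to failure of one disk */
        double mttr_hours = 10.0;       /* mean time to repair (rebuild) */

        /* Mean time to data loss for an independently failing mirrored pair:
           MTTF^2 / (2 * MTTR). */
        double mttdl_hours = (mttf_hours * mttf_hours) / (2.0 * mttr_hours);
        printf("MTTDL = %.0f hours (about %.0f years)\n",
               mttdl_hours, mttdl_hours / (24.0 * 365.0));
        return 0;
    }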
RAID (Cont.)
Disk striping uses a group of disks as one storage unit
RAID is arranged into six different levels
RAID schemes improve performance and improve the reliability of
the storage system by storing redundant data
Mirroring or shadowing (RAID 1) keeps duplicate of each
disk
Striped mirrors (RAID 1+0) or mirrored stripes (RAID 0+1)
provides high performance and high reliability
Block interleaved parity (RAID 4, 5, 6) uses much less redundancy (a small XOR-parity sketch follows this list)
RAID within a storage array can still fail if the array fails, so
automatic replication of the data between arrays is common
Frequently, a small number of hot-spare disks are left
unallocated, automatically replacing a failed disk and having data
rebuilt onto them
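To make block-interleaved parity concrete, here is a minimal sketch showing how a parity block is computed as the XOR of the data blocks and how a lost block is rebuilt from the survivors. Block size and disk count are illustrative; a real controller works on full disk blocks and handles many more cases:

    #include <stdio.h>
    #include <string.h>

    #define BLOCK_SIZE 16   /* tiny block just for illustration */
    #define DATA_DISKS 4

    /* Parity = XOR of the corresponding bytes of every data block. */
    static void compute_parity(unsigned char data[DATA_DISKS][BLOCK_SIZE],
                               unsigned char parity[BLOCK_SIZE]) {
        memset(parity, 0, BLOCK_SIZE);
        for (int d = 0; d < DATA_DISKS; d++)
            for (int i = 0; i < BLOCK_SIZE; i++)
                parity[i] ^= data[d][i];
    }

    /* Rebuild one failed data block by XOR-ing parity with the surviving blocks. */
    static void rebuild(unsigned char data[DATA_DISKS][BLOCK_SIZE],
                        unsigned char parity[BLOCK_SIZE], int failed) {
        for (int i = 0; i < BLOCK_SIZE; i++) {
            unsigned char b = parity[i];
            for (int d = 0; d < DATA_DISKS; d++)
                if (d != failed)
                    b ^= data[d][i];
            data[failed][i] = b;
        }
    }

    int main(void) {
        unsigned char data[DATA_DISKS][BLOCK_SIZE], parity[BLOCK_SIZE];
        for (int d = 0; d < DATA_DISKS; d++)
            memset(data[d], 'A' + d, BLOCK_SIZE);
        compute_parity(data, parity);
        memset(data[2], 0, BLOCK_SIZE);               /* pretend disk 2 failed */
        rebuild(data, parity, 2);
        printf("rebuilt block: %.*s\n", BLOCK_SIZE, (char *)data[2]);  /* "CCCC..." */
        return 0;
    }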
RAID Levels
RAID (0 + 1) and (1 + 0)
Other Features
Regardless of where RAID is implemented, other useful features can be added
Snapshot is a view of the file system before a set of changes takes place (i.e., at a point in time)
More in Ch 12
Replication is automatic duplication of writes between separate
sites
For redundancy and disaster recovery
Can be synchronous or asynchronous
Hot spare disk is kept unused; if a disk fails, the RAID system automatically substitutes the spare for the failed disk and rebuilds the RAID set onto it if possible
Decreases mean time to repair
Extensions
RAID alone does not prevent or detect data corruption or other
errors, just disk failures
Solaris ZFS adds checksums of all data and metadata
Checksums kept with the pointer to the object, to detect if the object is the right one and whether it changed (a minimal sketch follows this list)
Can detect and correct data and metadata corruption
ZFS also does away with volumes and partitions
Disks allocated in pools
Filesystems within a pool share that pool, using and releasing space the way malloc() and free() allocate and release memory
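A rough sketch of the checksum-in-the-parent idea: the parent block pointer carries both the child's address and the child's expected checksum, so a read can be verified against it. The struct fields and the toy checksum below are illustrative placeholders, not ZFS's actual on-disk format (ZFS uses stronger checksums such as Fletcher or SHA-256):

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    /* Parent keeps both the child's address and the child's expected checksum. */
    struct blkptr {
        uint64_t disk_addr;   /* where the child block lives */
        uint32_t checksum;    /* checksum of the child's contents */
    };

    /* Toy checksum for illustration only. */
    static uint32_t checksum(const void *buf, size_t len) {
        const unsigned char *p = buf;
        uint32_t sum = 0;
        for (size_t i = 0; i < len; i++)
            sum = sum * 31 + p[i];
        return sum;
    }

    /* Verify a block read against the checksum stored in its parent pointer. */
    static int verify_block(const struct blkptr *bp, const void *buf, size_t len) {
        return checksum(buf, len) == bp->checksum;   /* 0 = corruption detected */
    }

    int main(void) {
        char block[512];
        memset(block, 'x', sizeof block);
        struct blkptr bp = { 12345, checksum(block, sizeof block) };

        block[7] = 'y';   /* simulate silent corruption */
        printf("block ok? %s\n", verify_block(&bp, block, sizeof block) ? "yes" : "no");
        return 0;
    }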
ZFS Checksums All Metadata and Data
Traditional and Pooled Storage
Stable-Storage Implementation
KERNEL
• $# Stores the number of command-line arguments that were passed to the shell program.
• $? Stores the exit value of the last command that was executed.
LINUX FILE STRUCTURE
The Linux File Hierarchy Structure or the Filesystem Hierarchy Standard
(FHS) defines the directory structure and directory contents in Unix-like
operating systems.
It is maintained by the Linux Foundation.
In the FHS, all files and directories appear under the root directory /,
even if they are stored on different physical or virtual devices.
Some of these directories only exist on a particular system if certain
subsystems, such as the X Window System, are installed.
Most of these directories exist in all UNIX operating systems and are
generally used in much the same way; however, the descriptions here
are those used specifically for the FHS, and are not considered
authoritative for platforms other than Linux.