


Operating System and Networking

CARLO B. CORPUZ

MIT 101

ACTIVITY 2

1. In Fig. 2-8, a multithreaded Web server is shown. If the only way to read from a file is the normal
blocking read system call, do you think user-level threads or kernel-level threads are being used
for the Web server? Why?
- A worker thread will block when it has to read a Web page from the disk. If user-
level threads are being used, this action will block the entire process, destroying the
value of multithreading. Thus it is essential that kernel threads are used to permit
some threads to block without affecting the others.
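The point above can be demonstrated with a small sketch. Python's threading module maps each thread onto a kernel-level thread, so when one thread blocks (simulated here with a sleep standing in for a blocking disk read), the other threads keep running. The worker names and timings are illustrative, not from the text.

```python
import threading
import time

results = []

def blocking_worker():
    # Simulate a blocking read system call with sleep(); because these
    # are kernel-level threads, the OS can run other threads meanwhile.
    time.sleep(0.5)
    results.append("io-done")

def cpu_worker():
    # A thread with no I/O finishes immediately, unaffected by the
    # blocked thread.
    results.append("cpu-done")

start = time.time()
t1 = threading.Thread(target=blocking_worker)
t2 = threading.Thread(target=cpu_worker)
t1.start()
t2.start()
t1.join()
t2.join()
elapsed = time.time() - start
```

With purely user-level threads, the blocking read would have stalled the whole process, and cpu_worker could not have finished first.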

2. In the text, we described a multithreaded Web server, showing why it is better than a single-
threaded server and a finite-state machine server. Are there any circumstances in which a
single-threaded server might be better? Give an example.
- Yes. If the server is entirely CPU bound, there is no need to have multiple threads. It
just adds unnecessary complexity. As an example, consider a telephone directory
assistance number (like 555-1212) for an area with 1 million people. If each (name,
telephone number) record is, say, 64 characters, the entire database takes 64
megabytes and can easily be kept in the server's memory to provide fast lookup.

3. If a multithreaded process forks, a problem occurs if the child gets copies of all the parent's
threads. Suppose that one of the original threads was waiting for keyboard input. Now two
threads are waiting for keyboard input, one in each process. Does this problem ever occur in
single-threaded processes?
- No. If a single-threaded process is blocked on the keyboard, it cannot fork.

4. In the text it was stated that the model of Fig. 2-11(a) was not suited to a file server using a
cache in memory. Why not? Could each process have its own cache?
- It would be difficult, if not impossible, to keep the file system consistent. Suppose
that a client process sends a request to server process 1 to update a file. This
process updates the cache entry in its memory. Shortly thereafter, another client
process sends a request to server 2 to read that file. Unfortunately, if the file is also
cached there, server 2, in its innocence, will return obsolete data. If the first process
writes the file through to the disk after caching it, and server 2 checks the disk on
every read to see if its cached copy is up-to-date, the system can be made to work,
but it is precisely all these disk accesses that the caching system is trying to avoid.
5. Assume that you are trying to download a large 2-GB file from the Internet. The file is available
from a set of mirror servers, each of which can deliver a subset of the file’s bytes; assume that a
given request specifies the starting and ending bytes of the file. Explain how you might use
threads to improve the download time.
- The client process can create separate threads; each thread can fetch a different
part of the file from one of the mirror servers. This can help reduce the download
time. Of course, there is a single network link being shared by all threads. This link
can become a bottleneck as the number of threads becomes very large.
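A minimal sketch of the idea: one thread per byte range, with each range handed to a different mirror. The in-memory FILE_DATA and the fetch_range helper are stand-ins for a real HTTP Range request to a mirror server.

```python
import threading

# Stand-in for the remote file; a real client would not hold this locally.
FILE_DATA = bytes(range(256)) * 4

def fetch_range(start, end, out, idx):
    # In a real client this would issue a request to mirror idx asking
    # for bytes [start, end) of the file.
    out[idx] = FILE_DATA[start:end]

def parallel_download(n_threads):
    size = len(FILE_DATA)
    chunk = (size + n_threads - 1) // n_threads
    parts = [None] * n_threads
    threads = []
    for i in range(n_threads):
        lo, hi = i * chunk, min((i + 1) * chunk, size)
        t = threading.Thread(target=fetch_range, args=(lo, hi, parts, i))
        t.start()
        threads.append(t)
    for t in threads:
        t.join()
    # Reassemble the chunks in order.
    return b"".join(parts)
```

Each thread blocks on its own transfer independently, so the slowest mirror, not the sum of all transfers, bounds the download time (until the shared link saturates).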

6. Consider a multiprogrammed system with degree of 6 (i.e., six programs in memory at the same
time). Assume that each process spends 40% of its time waiting for I/O. What will be the CPU
utilization?
- The probability that all six processes are waiting for I/O is 0.4^6, which is
0.004096. Therefore, CPU utilization = 1 − 0.004096 = 0.995904.
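The calculation above follows the standard utilization formula, utilization = 1 − p^n, where p is each process's I/O wait fraction and n is the degree of multiprogramming:

```python
p_io = 0.4   # fraction of time each process waits for I/O
n = 6        # degree of multiprogramming

# The CPU is idle only when all n processes wait for I/O at once.
utilization = 1 - p_io ** n
```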

7. Multiple jobs can run in parallel and finish faster than if they had run sequentially. Suppose that
two jobs, each needing 20 minutes of CPU time, start simultaneously. How long will the last one
take to complete if they run sequentially? How long if they run in parallel? Assume 50% I/O wait.
- If each job has 50% I/O wait, then it will take 40 minutes to complete in the absence
of competition. If run sequentially, the second one will finish 80 minutes after the
first one starts. With two jobs, the approximate CPU utilization is 1 − 0.5^2 = 0.75. Thus,
each one gets 0.375 CPU minute per minute of real time. To accumulate 20 minutes
of CPU time, a job must run for 20/0.375 minutes, or about 53.33 minutes. Thus
running sequentially the jobs finish after 80 minutes, but running in parallel they
finish after 53.33 minutes.
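The arithmetic above can be checked directly with the same utilization formula:

```python
p_io = 0.5           # each job waits for I/O half the time
cpu_needed = 20      # minutes of CPU time per job

# Sequential: one job at a time gets (1 - p_io) of the CPU,
# so each takes 20 / 0.5 = 40 minutes; two in a row take 80.
sequential_time = 2 * cpu_needed / (1 - p_io)

# Parallel: utilization with two jobs is 1 - p_io^2 = 0.75,
# split evenly, so each job accumulates 0.375 CPU-minutes
# per minute of real time.
per_job_rate = (1 - p_io ** 2) / 2
parallel_time = cpu_needed / per_job_rate
```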

8. A computer has 4 GB of RAM of which the operating system occupies 512 MB. The processes are
all 256 MB (for simplicity) and have the same characteristics. If the goal is 99% CPU utilization,
what is the maximum I/O wait that can be tolerated?
- There is enough room for 14 processes in memory. If each process has an I/O
wait fraction of p, then the probability that they are all waiting for I/O is p^14.
Setting this idle fraction to 0.01 gives the equation p^14 = 0.01. Solving this, we
get p ≈ 0.72, so we can tolerate processes with up to 72% I/O wait.
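Solving p^14 = 0.01 amounts to taking the 14th root of 0.01, which can be verified numerically:

```python
ram_mb = 4096        # total RAM
os_mb = 512          # occupied by the operating system
proc_mb = 256        # size of each process

# How many processes fit in the remaining memory.
n = (ram_mb - os_mb) // proc_mb   # 14

# Solve p^n = 0.01 for the tolerable I/O wait fraction p.
p = 0.01 ** (1 / n)
```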

9. A computer system has enough room to hold five programs in its main memory. These
programs are idle waiting for I/O half the time. What fraction of the CPU time is wasted?
- The chance that all five processes are idle is 1/32, so the CPU idle time is 1/32.
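This is the same idle-probability calculation as before, with p = 0.5 and n = 5:

```python
# CPU time is wasted only when all five programs wait for I/O at once.
wasted = 0.5 ** 5   # = 1/32 = 0.03125
```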

10. When an interrupt or a system call transfers control to the operating system, a kernel stack area
separate from the stack of the interrupted process is generally used. Why?
- There are several reasons for using a separate stack for the kernel. Two of them are
as follows. First, you do not want the operating system to crash because a poorly
written user program does not allow for enough stack space. Second, if the kernel
leaves stack data in a user program's memory space upon return from a system call,
a malicious user might be able to use this data to find out information about other
processes.
