12 December 2015
The reason Shortest-Seek-Time-First (SSTF) scheduling tends to favor the middle cylinders over the outermost and innermost cylinders is throughput: the goal is to service as many requests as possible per unit time. This favoritism also helps minimize the average response time, which provides a certain level of predictability. The central section of the disk has the smallest mean distance to every other track. Therefore, after servicing the first request, the head is most likely to be closer to the center tracks than to any other region, and the algorithm will most probably move toward the center. Once it reaches a particular track, SSTF tends to keep the head near that track. SSTF scheduling therefore always has an initial affinity toward the middle cylinders (“Galvin exercise” n.pag).
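This affinity can be checked numerically. The sketch below is only an illustrative check, not part of the original solution; the 200-cylinder disk is an arbitrary example. It computes the mean seek distance from a given cylinder to every other cylinder and confirms that the middle cylinder minimizes it.

```python
N = 200  # hypothetical number of cylinders (arbitrary example)

def mean_dist(c):
    """Average seek distance from cylinder c to every other cylinder."""
    return sum(abs(c - t) for t in range(N)) / (N - 1)

# mean distance from the innermost, middle, and outermost cylinders
print(mean_dist(0), mean_dist(N // 2), mean_dist(N - 1))

# the middle cylinder has the smallest mean distance to all others
assert mean_dist(N // 2) < mean_dist(0)
assert mean_dist(N // 2) < mean_dist(N - 1)
```

This is exactly why, averaged over many requests, the head under SSTF spends most of its time near the center of the disk.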
Rotational latency is normally ignored in disk scheduling because most disks do not transfer their rotational-position information to the host. Even if they did, the time it would take this information to reach the scheduler would be imprecise, and the time the scheduler itself consumes is variable, so the rotational-position information would be inaccurate by the time it was used. Furthermore, disk requests are usually issued in terms of logical block numbers, and the mapping between logical blocks and physical locations is very complex (“Galvin exercise” n.pag).
(a) For 512 bytes, the effective transfer rate (ETR) is given by:
the transfer size divided by the transfer time.
If the transfer size is X, the transfer time is (latency + (X/STR)).
For 512 bytes, the transfer time is 15 ms + (512 bytes / 5 MB per second) ≈ 15 ms + 0.1 ms = 15.1 milliseconds.
The effective transfer rate is therefore 512 bytes divided by the transfer time (15.1 ms) ≈ 33.1 kilobytes per second.
For 8KB, the ETR ≈ 0.47 megabytes per second.
For 1MB, the ETR ≈ 4.65 megabytes per second.
For 16MB, the ETR ≈ 4.98 megabytes per second.
(b) The device utilization for 512 bytes = 33.1 KB/s (kilobytes per second) / 5 MB/s (megabytes per second) ≈ 0.0065 = 0.65%.
Device utilization for 8KB ≈ 9.4%.
Device utilization for 1MB ≈ 93%.
Device utilization for 16MB ≈ 99.5%.
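The arithmetic in parts (a) and (b) can be reproduced with a short script. This is only a check, not part of the original solution; it assumes MB = 2^20 bytes and the 15 ms combined access latency given in the exercise.

```python
STR = 5 * 2**20      # streaming transfer rate: 5 MB/s, in bytes per second
LATENCY = 0.015      # combined seek + rotational latency, in seconds

def etr(size_bytes):
    """Effective transfer rate: transfer size / (latency + size/STR)."""
    return size_bytes / (LATENCY + size_bytes / STR)

for size in (512, 8 * 2**10, 2**20, 16 * 2**20):
    rate = etr(size)
    print(f"{size:>9} B: ETR = {rate / 2**10:7.1f} KB/s, "
          f"utilization = {rate / STR:6.2%}")
```

The utilization column makes the trend of part (b) obvious: small transfers are dominated by the fixed 15 ms access latency, large transfers amortize it away.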
(c) The smallest transfer size for the disk that provides adequate utilization:
25% = ETR/STR (streaming transfer rate)
STR = 5 MB/s, therefore ETR = 0.25 × 5 MB/s = 1.25 MB/s
Setting X / ((X/5) + 0.015) = 1.25:
X = 0.25X + 0.01875
0.75X = 0.01875
X = 0.025 megabytes
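The same result follows in closed form. The sketch below is an illustrative check, not part of the original solution; it solves utilization = ETR/STR = 25% for X.

```python
# From X / (L + X/STR) = t * STR, solving for X gives:
#   X = t * STR * L / (1 - t)
STR, L, t = 5.0, 0.015, 0.25      # MB/s, seconds, target utilization
X = t * STR * L / (1 - t)
print(round(X, 6))                 # 0.025 (MB)

# sanity check: the resulting ETR is indeed 25% of STR
assert abs((X / (L + X / STR)) / STR - t) < 1e-12
```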
(d) The completed statement reads: a disk is a random-access device for transfers larger than 0.025 MB (the minimum size computed in part (c)) and a sequential-access device for smaller transfers.
(e) Find the minimum transfer sizes that provide adequate utilization of cache, tape, and memory.
First, for cache: STR = 800 MB/s, ETR = 200 MB/s, and latency = 8 × 10⁻⁹ seconds.
200 × ((X/800) + 8 × 10⁻⁹) = X
0.25X + 1600 × 10⁻⁹ = X
0.75X = 1600 × 10⁻⁹
X ≈ 2.13 × 10⁻⁶ MB, i.e., roughly two bytes.
For tape: STR = 2 MB/s, L = 60 seconds, and ETR = 0.5 MB/s.
0.5 × ((X/2) + 60) = X
0.25X + 30 = X
0.75X = 30
X = 40 megabytes.
For memory: STR = 80 MB/s, L = 60 × 10⁻⁹ seconds, and ETR = 20 MB/s.
20 × ((X/80) + 60 × 10⁻⁹) = X
0.25X + 1200 × 10⁻⁹ = X
0.75X = 1200 × 10⁻⁹
X = 1.6 × 10⁻⁶ MB, i.e., roughly 1.7 bytes.
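The closed form X = target × STR × latency / (1 − target) covers all three devices at once. A small check follows (illustrative only, not part of the original solution; all sizes are in MB):

```python
def min_transfer_size(STR, latency, target=0.25):
    """Smallest transfer size (in MB) achieving the target utilization.

    Derived from X / (latency + X/STR) = target * STR.
    """
    return target * STR * latency / (1 - target)

print(min_transfer_size(800, 8e-9))    # cache:  about 2.1e-06 MB (a couple of bytes)
print(min_transfer_size(2, 60))        # tape:   40.0 MB
print(min_transfer_size(80, 60e-9))    # memory: about 1.6e-06 MB
```

The wide spread of answers (bytes for cache and memory, tens of megabytes for tape) reflects how much each device's latency dominates its streaming rate.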
(f) When is a tape a random-access device, and when is it a sequential-access device?
The distinction depends on how the device is used. If the tape is used to restore a backup, it acts as a sequential-access device: its contents are read in order from beginning to end. If instead the tape is used to access a variety of data stored at different places on it, the accesses follow no particular order, and the tape acts as a random-access device (“Galvin exercise” n.pag).
(4) Yes, a RAID level 1 organization can perform better than a RAID level 0 organization. Depending on the nature of the input/output (I/O) load, the read performance of RAID level 1 can approach the sum of the performances of its individual members, since each mirror can service a different read request; in that case its performance exceeds that of RAID 0.
(5) It is true that no disk-scheduling discipline other than FCFS is truly fair: under any other discipline, new requests for the track over which the head currently resides can keep arriving as fast as existing requests are serviced, starving requests for other tracks (“Mass storage structure” 79).
To prevent this, every request older than a specified age can be forced to the front of the queue, with an associated bit set to indicate that no new request may be inserted ahead of it. For SSTF, the remainder of the queue would then have to be reorganized with respect to the last of these old requests (“Mass storage structure” 81).
Fairness is an important goal in a time-sharing system because it helps avoid abnormally long response times (“Mass storage structure” 82).
Unfairness in servicing I/O requests can nevertheless be desirable. Paging and swapping should be given priority over other requests. Kernel-initiated I/O, such as writes of file-system metadata, should also take precedence over user I/O. And if the kernel supports real-time process priorities, the I/O requests of real-time processes should be serviced first.
(6) Total distance the disk arm moves to satisfy all pending requests. (The schedules below assume a disk of 5,000 cylinders numbered 0–4999, with the head starting at cylinder 143 and pending requests at cylinders 86, 1470, 913, 1774, 948, 1509, 1022, 1750, and 130.)
The FCFS schedule is 143, 86, 1470, 913, 1774, 948, 1509, 1022, 1750, 130. The total distance moved by the disk arm is 7081 cylinders.
The SSTF schedule is 143, 130, 86, 913, 948, 1022, 1470, 1509, 1750, 1774. The total distance moved by the disk arm is 1745 cylinders.
The SCAN schedule is 143, 913, 948, 1022, 1470, 1509, 1750, 1774, 4999, 130, 86. The total distance moved by the disk arm is 9769 cylinders.
The LOOK schedule is 143, 913, 948, 1022, 1470, 1509, 1750, 1774, 130, 86. The total distance moved by the disk arm is 3319 cylinders.
The C-SCAN schedule is 143, 913, 948, 1022, 1470, 1509, 1750, 1774, 4999, 86, 130. The total distance moved by the disk arm is 9813 cylinders.
The C-LOOK schedule is 143, 913, 948, 1022, 1470, 1509, 1750, 1774, 86, 130. The total distance moved by the disk arm is 3363 cylinders.
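Several of these totals can be verified by a small simulation. The sketch below is not part of the original solution; it rebuilds the FCFS, SSTF, and LOOK service orders from the head position and request queue given above and sums the arm movement.

```python
def total_movement(order):
    """Total distance the arm travels when servicing cylinders in order."""
    return sum(abs(b - a) for a, b in zip(order, order[1:]))

head = 143
queue = [86, 1470, 913, 1774, 948, 1509, 1022, 1750, 130]

def sstf_order(head, pending):
    """Greedily service the nearest pending cylinder, SSTF-style."""
    order, pos, pending = [head], head, list(pending)
    while pending:
        pos = min(pending, key=lambda c: abs(c - pos))
        pending.remove(pos)
        order.append(pos)
    return order

# LOOK, with the head initially moving toward higher-numbered cylinders
look = ([head]
        + sorted(c for c in queue if c >= head)
        + sorted((c for c in queue if c < head), reverse=True))

print("FCFS:", total_movement([head] + queue))           # 7081
print("SSTF:", total_movement(sstf_order(head, queue)))  # 1745
print("LOOK:", total_movement(look))                     # 3319
```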
(7) Comparison between the performance of SCAN and C-SCAN scheduling (examined here through their LOOK and C-LOOK variants).
C-LOOK is normally observed to have a slightly higher mean response time than LOOK, but a substantially lower variance in response time under medium and heavy workloads. This difference arises because LOOK and SCAN favor requests close to the central cylinders, whereas C-LOOK and C-SCAN do not exhibit this bias. The slower mean response time of the circular versions is caused by the circular seek, which services no requests.
Note also that these algorithms do nothing to improve rotational latency. Therefore, as seek times diminish relative to rotational latency, the performance differences between the algorithms will lessen.
(8) SSTF would be the most appropriate choice in this situation. FCFS would cause unnecessary head movement whenever requests for the high-demand cylinders were interspersed with requests for distant cylinders.
The hot data could also be placed near the middle of the disk, with SSTF modified to prevent starvation: whenever the disk has been idle for more than, say, fifty milliseconds, an anticipatory seek to the hot region could be issued, since the next request is most likely to be there (“Mass storage structure” 84).
(9) Consider a RAID level 5 organization with five disks, with sets of four data blocks and their parity block stored across the five disks.
How many blocks are accessed in order to write one data block?
To write one data block, one must: read the old data from the target block, read the corresponding parity block, and then write both the new target block and the new parity block, for four block accesses in all. (The new parity is computed as old parity XOR old data XOR new data.)
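The small-write parity update just described can be sketched as follows (illustrative only; the 4-byte blocks and the sample values are arbitrary). The parity produced by the XOR rule matches the parity recomputed over the whole stripe.

```python
def xor_blocks(a, b):
    """Byte-wise XOR of two equal-length blocks."""
    return bytes(x ^ y for x, y in zip(a, b))

# a stripe of four data blocks (arbitrary 4-byte example blocks)
data = [b"\x01\x02\x03\x04", b"\x10\x20\x30\x40",
        b"\x05\x06\x07\x08", b"\x0a\x0b\x0c\x0d"]
parity = data[0]
for blk in data[1:]:
    parity = xor_blocks(parity, blk)      # parity = XOR of the whole stripe

new_block = b"\xff\xee\xdd\xcc"
# small write: new parity = old parity XOR old data XOR new data
new_parity = xor_blocks(xor_blocks(parity, data[2]), new_block)
data[2] = new_block

# verify against recomputing parity over the updated stripe
full = data[0]
for blk in data[1:]:
    full = xor_blocks(full, blk)
assert new_parity == full
print("parity update verified")
```

This is why only two reads and two writes are needed, rather than reading the entire stripe.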
How many blocks are accessed to write seven contiguous data blocks?
Assume the seven blocks begin on a four-block stripe boundary. The first four blocks, along with their parity block, can simply be written, since that parity is computed directly from the four new blocks. For the remaining three blocks, one must read the eighth block of the second stripe, compute that stripe's parity, and write the three data blocks and the parity block to disk, for ten block accesses in all (nine writes and one read).
(10) Reliability of a hard-disk drive.
Dividing the 750,000 drive-hours by the 1,000 drives gives 750 hours per failure. Dividing 750 by 24 hours in a day gives about 31 days. A drive failure can therefore be expected roughly once a month in this disk farm.
A year contains 8,760 hours. Dividing this by the mortality rate of 0.001 gives an MTBF (mean time between failures) of 8.76 million hours, which is equivalent to about 1,000 years. Logically speaking, this figure says nothing meaningful about the expected remaining lifetime of a particular 20-year-old.
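Both calculations can be written out directly (an arithmetic check only, not part of the original solution):

```python
# disk farm: 1,000 drives, one failure per 750,000 drive-hours
drive_hours_per_failure = 750_000
drives = 1000
hours_between_farm_failures = drive_hours_per_failure / drives
print(hours_between_farm_failures / 24)   # 31.25 days, about one failure a month

# ages 20-21: mortality rate of 0.001 deaths per person-year
mortality_rate = 0.001
mtbf_hours = 365 * 24 / mortality_rate
print(mtbf_hours, mtbf_hours / (365 * 24))   # about 8.76 million hours, i.e. 1000 years
```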
A hard disk is normally designed to last about five years. If such a drive has an MTBF of one million hours (about 114 years), it should not be expected to fail during its anticipated lifetime.
“Mass Storage Structure.” Web. 12 Dec. 2015.
“Galvin Exercise Sol-2.” Prep4placement. 2 Sept. 2012. Web. 12 Dec. 2015.