> What is the difference between distributed mutual exclusion enforced by a centralized algorithm and by a distributed algorithm?
> Why is the principle of locality crucial to the use of virtual memory?
> Explain thrashing.
> What is the difference between simple paging and virtual memory paging?
> What is the distinction between blocking and nonblocking with respect to messages?
> What is a monitor?
> Because we have standards such as TCP/IP, why is middleware needed?
> What is the difference between strong and weak semaphores?
> What is the difference between a page and a segment?
> What is the difference between a page and a frame?
> What are the distinctions among logical, relative, and physical addresses?
> What is the difference between internal and external fragmentation?
> In a fixed partitioning scheme, what are the advantages of using unequal-size partitions?
> What are some reasons to allow two or more processes to all have access to a particular region of memory?
> Why is it not possible to enforce memory protection at compile time?
> Why is the capability to relocate processes desirable?
> What requirements is memory management intended to satisfy?
> What is middleware?
> What is the difference between binary and general semaphores?
> What operations can be performed on a semaphore?
> What is the difference among deadlock avoidance, detection, and prevention?
> How can the circular wait condition be prevented?
> List two ways in which the no-preemption condition can be prevented.
> How can the hold-and-wait condition be prevented?
> What are the four conditions that create deadlock?
> What are the three conditions that must be present for deadlock to be possible?
> Give examples of reusable and consumable resources.
> List the requirements for mutual exclusion.
> Explain the rationale behind the three-tier client/server architecture.
> List the three control problems associated with competing processes, and briefly define each.
> What is the distinction between competing processes and cooperating processes?
> List three degrees of awareness between processes and briefly define each.
> What is the basic requirement for the execution of concurrent processes?
> What are three contexts in which concurrency arises?
> List four design issues for which the concept of concurrency is relevant.
> Give three examples of an interrupt.
> What is the difference between an interrupt and a trap?
> What are the steps performed by an OS to create a new process?
> Why are two modes (user and kernel) needed?
> Define the two types of distributed deadlock.
> List three general categories of information in a process control block.
> Define jacketing.
> List two disadvantages of ULTs compared to KLTs.
> List three advantages of ULTs over KLTs.
> What resources are typically shared by all of the threads of a process?
> Give four general examples of the use of threads in a single-user multiprocessing system.
> What are the two separate and potentially independent characteristics embodied in the concept of process?
> List reasons why a mode switch between threads may be cheaper than a mode switch between processes.
> Table 3.5 lists typical elements found in a process control block for an unthreaded OS. Of these, which should belong to a thread control block, and which should belong to a process control block for a multithreaded system? (Table 3.5: typical elements of a process control block, beginning with process identification.)
> For what types of entities does the OS maintain tables of information for management purposes?
> List four characteristics of a suspended process.
> Why does Figure 3.9b have two blocked states? (Figure 3.9b, "With two Suspend states": process state transition diagram with states New, Ready/Suspend, Ready, Running, Blocked/Suspend, Blocked, and Exit.)
> What is swapping and what is its purpose?
> What does it mean to preempt a process?
> For the processing model of Figure 3.6, briefly define each state. (Figure 3.6: five-state process model — New, Ready, Running, Blocked, and Exit, with transitions Admit, Dispatch, Time-out, Event wait, Event occurs, and Release.)
> What common events lead to the creation of a process?
> Generalize Equations (1.1) and (1.2) in Appendix 1A to n-level memory hierarchies.
> Directories can be implemented either as “special files” that can only be accessed in limited ways or as ordinary data files. What are the advantages and disadvantages of each approach?
> What are the advantages of using directories?
> Ignoring overhead for directories and file descriptors, consider a file system in which files are stored in blocks of 16K bytes. For each of the following file sizes, calculate the percentage of wasted file space due to incomplete filling of the last block.
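For problems of this form, a minimal Python sketch of the arithmetic may help; it assumes "16K bytes" means 16,384 bytes, and the file sizes fed in are arbitrary examples, not the (truncated) list from the original problem.

```python
BLOCK = 16 * 1024  # assumes "16K bytes" means 16,384 bytes per block

def wasted_percentage(file_size: int, block: int = BLOCK) -> float:
    """Percent of allocated space wasted because the last block is only partly filled."""
    blocks_needed = -(-file_size // block)        # ceiling division
    allocated = blocks_needed * block
    return 100.0 * (allocated - file_size) / allocated

# Arbitrary example sizes (the file sizes listed in the original problem are not shown here)
for size in (1_000, 20_000, 200_000):
    print(f"{size:>7} bytes -> {wasted_percentage(size):6.2f}% wasted")
```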
> Both the search and the insertion time for a B-tree are a function of the height of the tree. We would like to develop a measure of the worst-case search or insertion time. Consider a B-tree of degree d that contains a total of n keys. Develop an inequality that bounds the height of the tree in terms of d and n.
> An alternative algorithm for insertion into a B-tree is the following: As the insertion algorithm travels down the tree, each full node that is encountered is immediately split, even though it may turn out that the split was unnecessary. a. What is the a
> For the B-tree in Figure 12.4c, show the result of inserting the key 97. (Figure 12.4: a B-tree node with k children — alternating subtree pointers and keys.)
> What file organization would you choose to maximize efficiency in terms of speed of access, use of storage space, and ease of updating (adding/deleting/modifying) when the data are: a. updated infrequently and accessed frequently in random order? b. upda
> What is an instruction trace?
> One scheme to avoid the problem of preallocation versus waste or lack of contiguity is to allocate portions of increasing size as the file grows. For example, begin with a portion size of one block, and double the portion size for each allocation. Consid
> Some operating systems have a tree-structured file system but limit the depth of the tree to some small number of levels. What effect does this limit have on users? How does this simplify file system design (if it does)?
> Define B = block size, R = record size, P = size of block pointer, and F = blocking factor (the expected number of records within a block). Give a formula for F for the three blocking methods depicted in Figure 12.8. (Figure 12.8: record blocking methods — records laid out on a track.)
> Repeat the preceding problem using DMA, and assume one interrupt per sector. Data from problem 11.8: There are 512 bytes/sector. Since each byte generates an interrupt, there are 512 interrupts. Total interrupt processing time = 2.5 × 512 = 1280 µs. The
> Consider the disk system described in Problem 11.7, and assume the disk rotates at 360 rpm. A processor reads one sector from the disk using interrupt-driven I/O, with one interrupt per byte. If it takes 2.5 µs to process each interrupt, what percentage of the time will the processor spend handling I/O?
> Calculate how much disk space (in sectors, tracks, and surfaces) will be required to store 300,000 120-byte logical records if the disk is fixed sector with 512 bytes/sector, with 96 sectors/track, 110 tracks per surface, and 8 usable surfaces. Ignore an
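A short Python sketch of one way to do this arithmetic, under the common assumption that logical records are not allowed to span sector boundaries (a different blocking assumption would change the sector count); the printed values come straight from the formulas below, not from any answer key.

```python
import math

# Parameters given in the problem statement
RECORDS, RECORD_SIZE = 300_000, 120   # 120-byte logical records
SECTOR = 512                          # bytes per sector (fixed-sector disk)
SECTORS_PER_TRACK = 96
TRACKS_PER_SURFACE = 110

# Assumption: records do not span sectors, so each 512-byte sector
# holds floor(512 / 120) = 4 records.
records_per_sector = SECTOR // RECORD_SIZE
sectors = math.ceil(RECORDS / records_per_sector)
tracks = math.ceil(sectors / SECTORS_PER_TRACK)
surfaces = math.ceil(tracks / TRACKS_PER_SURFACE)

print(f"{sectors} sectors, {tracks} tracks, {surfaces} surfaces")
```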
> For the frequency-based replacement algorithm (see Figure 11.9), define Fnew, Fmiddle and Fold as the fraction of the cache that comprises the new, middle, and old sections, respectively. Clearly, Fnew+Fmiddle+Fold=1. Characterize the policy when a. Fold
> The following equation was suggested both for cache memory and disk cache memory: TS = TC + M × TD. Generalize this equation to a memory hierarchy with N levels instead of just two.
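One plausible way to set up the generalization, as a sketch rather than a definitive answer: take Ti as the access time of level i and Mi as the miss ratio at level i (so the two-level case has T1 = TC, T2 = TD, M1 = M), and note that level i is consulted only when all faster levels miss.

```latex
% Sketch of one possible N-level generalization of T_S = T_C + M \times T_D.
% Every access pays T_1; with probability M_1 it also pays T_2;
% with probability M_1 M_2 it also pays T_3; and so on.
T_S \;=\; \sum_{i=1}^{N} \left( \prod_{j=1}^{i-1} M_j \right) T_i
\qquad \text{(the empty product for } i = 1 \text{ is taken to be } 1\text{)}
```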
> Consider a disk with N tracks numbered from 0 to (N-1) and assume requested sectors are distributed randomly and evenly over the disk. We want to calculate the average number of tracks traversed by a seek. a. Calculate the probability of a seek of length
> a. Perform the same type of analysis as that of Table 11.2 for the following sequence of disk track requests: 27, 129, 110, 186, 147, 41, 10, 64, 120. Assume the disk head is initially positioned over track 100 and is moving in the direction of decreasing track number.
> In general terms, what are the four distinct actions that a machine instruction can specify?
> Generalize the result of Problem 11.1 to the case in which a program refers to n devices. Result of Problem 11.1: If the calculation time exactly equals the I/O time (which is the most favorable situation), both the processor and the peripheral device r
> An interactive system using round-robin scheduling and swapping tries to give guaranteed response to trivial requests as follows. After completing a round-robin cycle among all ready processes, the system determines the time slice to allocate to each rea
> Consider a 4-drive, 200 GB-per-drive RAID array. What is the available data storage capacity for each of the RAID levels, 0, 1, 3, 4, 5, and 6?
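A small Python sketch of the usual capacity arithmetic for n = 4 drives of 200 GB each; RAID 1 is assumed here to mirror the whole set (halving capacity), which is the standard textbook treatment.

```python
N_DRIVES, DRIVE_GB = 4, 200

# Usable data capacity in GB for the requested RAID levels (standard formulas)
capacity = {
    "RAID 0": N_DRIVES * DRIVE_GB,         # striping, no redundancy
    "RAID 1": N_DRIVES * DRIVE_GB // 2,    # mirroring
    "RAID 3": (N_DRIVES - 1) * DRIVE_GB,   # one dedicated parity disk
    "RAID 4": (N_DRIVES - 1) * DRIVE_GB,   # one dedicated parity disk
    "RAID 5": (N_DRIVES - 1) * DRIVE_GB,   # one disk's worth of distributed parity
    "RAID 6": (N_DRIVES - 2) * DRIVE_GB,   # two disks' worth of parity
}

for level, gb in capacity.items():
    print(f"{level}: {gb} GB")
```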
> It should be clear that disk striping can improve the data transfer rate when the strip size is small compared to the I/O request size. It should also be clear that RAID 0 provides improved performance relative to a single large disk, because multiple I/
> A 32-bit computer has two selector channels and one multiplexor channel. Each selector channel supports two magnetic disk and two magnetic tape units. The multiplexor channel has two line printers, two card readers, and ten VDT terminals connected to it.
> Consider a program that accesses a single I/O device and compare unbuffered I/O to the use of a buffer. Show that the use of the buffer can reduce the running time by at most a factor of two.
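A sketch of the standard argument, with T taken as the transfer time per block and C as the computation time per block (these symbols are introduced here, not taken from the problem statement): without a buffer the two phases serialize, while with one buffer the next transfer can overlap the current computation.

```latex
% Unbuffered: each block costs T + C (wait for the transfer, then compute).
% Buffered: transfer of block i+1 overlaps computation on block i, so the
% steady-state cost per block is max(T, C).
\frac{T + C}{\max(T, C)} \;\le\; \frac{2\,\max(T, C)}{\max(T, C)} \;=\; 2
```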
> Define residence time Tr as the average total time a process spends waiting and being served. Show that for FIFO, with mean service time Ts, we have Tr = Ts/(1 − ρ), where ρ is the utilization.
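Not a proof, but a quick numerical sanity check: assuming Poisson arrivals and exponential service (an M/M/1 queue, the setting in which this formula holds exactly), a short simulation of a FIFO single-server queue should land close to Ts/(1 − ρ).

```python
import random

def mm1_mean_residence(lam: float, ts: float, n_jobs: int = 200_000, seed: int = 1) -> float:
    """Simulate an M/M/1 FIFO queue; return mean residence time (wait + service)."""
    rng = random.Random(seed)
    arrival = 0.0        # arrival time of the current job
    server_free = 0.0    # time at which the server next becomes idle
    total = 0.0
    for _ in range(n_jobs):
        arrival += rng.expovariate(lam)        # Poisson arrivals, rate lam
        service = rng.expovariate(1.0 / ts)    # exponential service, mean ts
        start = max(arrival, server_free)      # FIFO: begin when the server is free
        finish = start + service
        server_free = finish
        total += finish - arrival
    return total / n_jobs

ts, rho = 1.0, 0.7
lam = rho / ts
print("simulated Tr:", round(mm1_mean_residence(lam, ts), 3))
print("Ts/(1-rho): ", round(ts / (1 - rho), 3))   # theoretical value ~3.333 for rho = 0.7
```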
> Draw a diagram similar to that of Figure 10.9b that shows the sequence of events for this same example using priority ceiling. (Figure 10.9b: timing diagram of tasks T1, T2, and T3 contending for a semaphore s, showing blocking, preemptions, and the points at which s is locked and unlocked.)
> This problem demonstrates that although Equation (10.2) for rate monotonic scheduling is a sufficient condition for successful scheduling, it is not a necessary condition (i.e., sometimes successful scheduling is possible even if Equation (10.2) is not satisfied).
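Assuming Equation (10.2) is the standard rate-monotonic utilization bound ΣCi/Ti ≤ n(2^(1/n) − 1), a small Python helper makes it easy to compare a task set against both that bound and full utilization. The example task set below is hypothetical (not the one from the problem's table); it exceeds the n = 2 bound of about 0.828 yet is still schedulable because its periods are harmonic.

```python
def rm_bound(n: int) -> float:
    """Liu-Layland rate-monotonic utilization bound for n periodic tasks."""
    return n * (2 ** (1.0 / n) - 1)

def check(tasks):
    """tasks: list of (execution_time, period) pairs."""
    u = sum(c / t for c, t in tasks)
    n = len(tasks)
    print(f"U = {u:.3f}, bound = {rm_bound(n):.3f}, "
          f"passes sufficient test: {u <= rm_bound(n)}")

# Hypothetical harmonic task set: U = 1.0 > bound, yet schedulable under RMS.
check([(2, 4), (4, 8)])
```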
> Repeat Problem 10.4, adding MUF to the diagrams. Comment on the results. Data from Problem 10.4: d. Consider a set of three periodic tasks with the execution profiles of Table 10.9a. Develop scheduling diagrams similar to those of Figure 10.5 for this set of tasks.
> Define the two main categories of processor registers.
> Maximum-urgency-first (MUF) is a real-time scheduling algorithm for periodic tasks. Each task is assigned an urgency that is defined as a combination of two fixed priorities and one dynamic priority. One of the fixed priorities, the criticality, has precedence over the dynamic priority.
> Repeat Problem 10.3d for the execution profiles of Table 10.9b. Comment on the results. Data from Problem 10.3d: d. Consider a set of three periodic tasks with the execution profiles of Table 10.9a. Develop scheduling diagrams similar to those of Figure 10.5 for this set of tasks.
> Least-laxity-first (LLF) is a real-time scheduling algorithm for periodic tasks. Slack time, or laxity, is the amount of time between when a task would complete if it started now and its next deadline. This is the size of the available scheduling window.
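A minimal sketch of how an LLF dispatcher might pick the next task, following the definition in the question: laxity = deadline − current time − remaining execution time, and the ready task with the smallest laxity runs. The `Task` fields and the example values are illustrative, not taken from the text.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    deadline: float     # absolute deadline of the current job
    remaining: float    # execution time still needed

def laxity(task: Task, now: float) -> float:
    """Slack available if the task were started right now."""
    return task.deadline - now - task.remaining

def pick_llf(ready: list[Task], now: float) -> Task:
    """Least-laxity-first: dispatch the ready task with the smallest laxity."""
    return min(ready, key=lambda t: laxity(t, now))

# Illustrative use (made-up values): B has laxity 6, A has laxity 9, so B runs.
ready = [Task("A", deadline=20, remaining=6), Task("B", deadline=15, remaining=4)]
print(pick_llf(ready, now=5).name)
```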
> Consider a set of five aperiodic tasks with the execution profiles of Table 10.8. Develop scheduling diagrams similar to those of Figure 10.6 for this set of tasks. (Table 10.8, Execution Profile for Problem 10.2: arrival time and execution time for each process.)
> In a queuing system, new jobs must wait for a while before being served. While a job waits, its priority increases linearly with time from zero at a rate α. A job waits until its priority reaches the priority of the jobs in service; then, it begins to share the processor with the jobs in service.
> Consider a variant of the RR scheduling algorithm where the entries in the ready queue are pointers to the PCBs. a. What would be the effect of putting two pointers to the same process in the ready queue? b. What would be the major advantage of this scheme?
> A processor is multiplexed at infinite speed among all processes present in a ready queue with no overhead. (This is an idealized model of round-robin scheduling among ready processes using time slices that are very small compared to the mean service time.)
> Consider a set of three periodic tasks with the execution profiles of Table 10.7. Develop scheduling diagrams similar to those of Figure 10.5 for this set of tasks. (Table 10.7, Execution Profile for Problem 10.1: arrival time and execution time for each process.)
> Prove that the minimax response ratio algorithm of the preceding problem minimizes the maximum response ratio for a given batch of jobs. (Hint: Focus attention on the job that will achieve the highest response ratio and all jobs executed before it. t1,
> In a nonpreemptive uniprocessor system, the ready queue contains three jobs at time t immediately after the completion of a job. These jobs arrived at times t1, t2, and t3 with estimated execution times of r1, r2, and r3, respectively. Figure 9.18 shows the linear increase of their response ratios over time.
> List and briefly define the four main elements of a computer.
> Why is it impossible to determine a true global state?
> In the bottom example in Figure 9.5, process A runs for two time units before control is passed to process B. Another plausible scenario would be that A runs for three time units before control is passed to process B. What policy differences in the feedback scheduling algorithm would account for the two different scheduling behaviors?
> Consider the following pair of equations as an alternative to Equation (9.3): Sn+1 = αTn + (1 − α)Sn and Xn+1 = min[Ubound, max[Lbound, βSn+1]], where Ubound and Lbound are prechosen upper and lower bounds on the estimated value of T. The value of Xn+1 is used in the shortest-process-next algorithm instead of Sn+1.
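A short Python sketch of the two equations, just to make the roles of α, β, Ubound, and Lbound concrete; the observed burst values and parameter settings below are made up for the trace, not taken from the problem.

```python
def next_estimate(s_n: float, t_n: float, alpha: float) -> float:
    """S_{n+1} = alpha * T_n + (1 - alpha) * S_n  (exponential averaging)."""
    return alpha * t_n + (1 - alpha) * s_n

def clamp_estimate(s_next: float, beta: float, lbound: float, ubound: float) -> float:
    """X_{n+1} = min[Ubound, max[Lbound, beta * S_{n+1}]]."""
    return min(ubound, max(lbound, beta * s_next))

# Made-up observed bursts T_n and parameters, to trace the recurrence:
s, alpha, beta, lbound, ubound = 5.0, 0.5, 1.2, 2.0, 10.0
for t in (3.0, 8.0, 20.0, 1.0):
    s = next_estimate(s, t, alpha)
    x = clamp_estimate(s, beta, lbound, ubound)
    print(f"T_n={t:>5}  S_n+1={s:6.2f}  X_n+1={x:6.2f}")
```

Note how Ubound caps the estimate when a single very long burst (20.0) would otherwise drive it high, while Lbound keeps it from collapsing after a very short burst.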