  • Operating Systems Exercise Solutions (in English)
    2008. 9. 1. 23:10
    1.1 What are the three main purposes of an operating system?
    
        Answer: To provide an environment for a computer user to execute
        programs on computer hardware in a convenient and efficient
        manner.  To allocate the separate resources of the computer as
        needed to solve the problem given. The allocation process should
        be as fair and efficient as possible.  As a control program it
        serves two major functions: (1) supervision of the execution of
        user programs to prevent errors and improper use of the computer,
        and (2) management of the operation and control of I/O devices.
    
    1.3 What is the main advantage of multiprogramming?
    
        Answer: Multiprogramming makes efficient use of the CPU by
        overlapping the demands for the CPU and its I/O devices from
        various users. It attempts to increase CPU utilization by always
        having something for the CPU to execute.
    
    1.5 In a multiprogramming and time-sharing environment, several users
        share the system simultaneously. This situation can result in
        various security problems.
          a. What are two such problems?
          b. Can we ensure the same degree of security in a time-shared
             machine as we have in a dedicated machine? Explain your
             answer.
    
        Answer:
          a. Stealing or copying one's programs or data; using system
             resources (CPU, memory, disk space, peripherals) without
             proper accounting.
          b. Probably not, since any protection scheme devised by humans
             can inevitably be broken by a human, and the more complex
             the scheme, the more difficult it is to feel confident of its
             correct implementation.
    
    
    1.6 Define the essential properties of the following types of
      operating systems
          a. Batch
          b. Interactive
          c. Time sharing
          d. Real time
          e. Network
          f. Distributed
    
        Answer:
          a. Batch. Jobs with similar needs are batched together and run
             through the computer as a group by an operator or automatic
             job sequencer. Performance is increased by attempting to keep
             CPU and I/O devices busy at all times through buffering,
             off-line operation, spooling, and multiprogramming. Batch is
             good for executing large jobs that need little interaction;
             it can be submitted and picked up later.
          b. Interactive. This system is composed of many short
             transactions where the results of the next transaction may be
             unpredictable. Response time needs to be short (seconds)
             since the user submits and waits for the result.
          c. Time sharing. This system uses CPU scheduling and
             multiprogramming to provide economical interactive use of a
             system. The CPU switches rapidly from one user to
             another. Instead of having a job defined by spooled card
             images, each program reads its next control card from the
             terminal, and output is normally printed immediately to the
             screen.
          d. Real time. Often used in a dedicated application, this system
             reads information from sensors and must respond within a
             fixed amount of time to ensure correct performance.
          e. Network. Provides operating-system features across a
             network, such as file sharing; each machine runs its own
             operating system, and users are aware of the multiple
             machines.
          f. Distributed. This system distributes computation among several
             physical processors.  The processors do not share memory or a
             clock. Instead, each processor has its own local memory. They
             communicate with each other through various communication
             lines, such as a high-speed bus or telephone line.
    
    1.10 What is the main difficulty that a programmer must overcome in
         writing an operating system for a real-time environment?
    
         Answer: The main difficulty is keeping the operating system
         within the fixed time constraints of a real-time system. If the
         system does not complete a task in a certain time frame, it may
         cause a breakdown of the entire system it is running. Therefore
         when writing an operating system for a real-time system, the
         writer must be sure that his scheduling schemes don't allow
         response time to exceed the time constraint.
    
    1.11 Consider the various definitions of operating system. Consider
         whether the operating system should include applications such
         as Web browsers and mail programs. Argue both that it should and
         that it should not, and support your answer.
    
         Answer: We will assume these applications are tightly coupled to
         the kernel of the OS.
    
         It should include these applications because, if they are
         important applications, it may be useful to make some of their
         functionality available to other programs through system
         libraries.  Also, tightly coupling the applications with the
         kernel can increase the speed of these important applications.
    
         It should not because the OS needs to be modular with clean
         interfaces to reduce bugs and security problems.
    
    2.3 What are the differences between a trap and an interrupt? What is
        the use of each function?
    
        Answer: An interrupt is a hardware-generated change-of-flow within
        the system. An interrupt handler is summoned to deal with the
        cause of the interrupt; control is then returned to the
        interrupted context and instruction. A trap is a
        software-generated interrupt.  An interrupt can be used to signal
        the completion of an I/O to obviate the need for device polling. A
        trap can be used to call operating system routines or to catch
        arithmetic errors.
    
    2.4 For what types of operations is DMA useful? Explain your answer.
    
        Answer: DMA is useful for transferring large quantities of data
        between memory and devices. It eliminates the need for the CPU to
        be involved in the transfer, allowing the transfer to complete
        more quickly and the CPU to perform other tasks concurrently.
    
    2.5 Which of the following instructions should be privileged?
          a. Set value of timer.
          b. Read the clock.
          c. Clear memory.
          d. Turn off interrupts.
          e. Switch from user to monitor mode.
    
        Answer: The following instructions should be privileged:
          a. Set value of timer.
          c. Clear memory.
          d. Turn off interrupts.
          e. Switch from user to monitor mode.
    
    2.6 Some computer systems do not provide a privileged mode of
        operation in hardware. Consider whether it is possible to
        construct a secure operating system for these computers.  Give
        arguments both that it is and that it is not possible.
    
        Answer: An operating system for a machine of this type would need
        to remain in control (or monitor mode) at all times. This could be
        accomplished by two methods:
          a. Software interpretation of all user programs (as in some
             BASIC, APL, and LISP systems, for example). The software
             interpreter would provide, in software, what the hardware
             does not provide.
          b. Require that all programs be written in high-level
             languages so that all object code is compiler-produced. The
             compiler would generate (either in-line or by function calls)
             the protection checks that the hardware is missing.
    
    2.7 Some early computers protected the operating system by placing it
        in a memory partition that could not be modified by either the
        user job or the operating system itself. Describe two difficulties
        that you think could arise with such a scheme.
    
        Answer: The data required by the operating system (passwords,
        access controls, accounting information, and so on) would have
        to be stored in or passed through unprotected memory and thus be
        accessible to unauthorized users.
    
    2.8 Protecting the operating system is crucial to ensuring that the
        computer system operates correctly. Provision of this protection
        is the reason behind dual-mode operation, memory protection, and
        the timer. To allow maximum flexibility, however, we would also
        like to place minimal constraints on the user.  The following is a
        list of operations that are normally protected. What is the
        minimal set of instructions that must be protected?
    
        a. Change to user mode.
        b. Change to monitor mode.
        c. Read from monitor memory.
        d. Write into monitor memory.
        e. Fetch an instruction from monitor memory.
        f. Turn on timer interrupt.
        g. Turn off timer interrupt.
    
        Answer: The minimal set of instructions that must be protected:
        b. Change to monitor mode.
        c. Read from monitor memory.
        d. Write into monitor memory.
        g. Turn off timer interrupt.
    
    3.6 List five services provided by an operating system. Explain how
        each provides convenience to the users. Explain also in which
        cases it would be impossible for user-level programs to provide
        these services.
    
        Answer:
    
            Program execution. The operating system loads the contents (or
            sections) of a file into memory and begins its execution. A
            user-level program could not be trusted to properly allocate
            CPU time.
    
            I/O operations. Disks, tapes, serial lines, and other devices
            must be communicated with at a very low level. The user need
            only specify the device and the operation to perform on it,
            while the system converts that request into device- or
            controller-specific commands. User-level programs cannot be
            trusted to only access devices they should have access to and
            to only access them when they are otherwise unused.
    
            File-system manipulation. There are many details in file
            creation, deletion, allocation, and naming that users should
            not have to perform. Blocks of disk space are used by files
            and must be tracked. Deleting a file requires removing the
            name file information and freeing the allocated
            blocks. Protections must also be checked to assure proper file
            access. User programs could neither ensure adherence to
            protection methods nor be trusted to allocate only free blocks
            and deallocate blocks on file deletion.
    
            Communications. Message passing between systems requires
            messages be turned into packets of information, sent to the
            network controller, transmitted across a communications
            medium, and reassembled by the destination system. Packet
            ordering and data correction must take place. Again, user
            programs might not coordinate access to the network device,
            or they might receive packets destined for other processes.
    
            Error detection. Error detection occurs at both the hardware
            and software levels. At the hardware level, all data transfers
            must be inspected to ensure that data have not been corrupted
            in transit. All data on media must be checked to be sure they
            have not changed since they were written to the media. At the
            software level, media must be checked for data consistency;
            for instance, do the number of allocated and unallocated
            blocks of storage match the total number on the device? Such
            errors are frequently process-independent (for instance, the
            corruption of data on a disk), so there must be a global
            program (the operating system) that handles all types of
            errors. Also, by having errors processed by the operating
            system, processes need not contain code to catch and correct
            all the errors possible on a system.
    
    3.7 What is the purpose of system calls?
    
        Answer: System calls allow user-level processes to request
        services of the operating system.
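
        As a concrete illustration (a minimal sketch, not part of the
        original answer), a user program on a POSIX system requests the
        kernel's I/O service through the write() system call:

        #include <string.h>
        #include <unistd.h>   /* write(): POSIX system-call wrapper */

        int main(void)
        {
          const char *msg = "hello from a system call\n";
          /* The C library traps into the kernel, which performs the
             privileged device I/O on the process's behalf. */
          write(STDOUT_FILENO, msg, strlen(msg));
          return 0;
        }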
    
    3.15 Why is the separation of mechanism and policy a desirable property?
    
         Answer: Mechanism and policy must be separate to ensure that
         systems are easy to modify. No two system installations are the
         same, so each installation may want to tune the operating system
         to suit its needs. With mechanism and policy separate, the policy
         may be changed at will while the mechanism stays unchanged. This
         arrangement provides a more flexible system.
    
    4.1 MS-DOS provided no means of concurrent processing. Discuss three
    major complications that concurrent processing adds to an operating
    system.
    
    Answer: A method of time sharing must be implemented to allow each of
    several processes to have access to the system. This method involves
    the preemption of processes that do not voluntarily give up the CPU
    (by using a system call, for instance) and the kernel being reentrant
    (so more than one process may be executing kernel code concurrently).
    Processes and system resources must have protections and must be
    protected from each other. Any given process must be limited in the
    amount of memory it can use and the operations it can perform on
    devices like disks.  Care must be taken in the kernel to prevent
    deadlocks between processes, so processes aren't waiting for each
    other's allocated resources.
    
    4.4 Describe the actions a kernel takes to context switch between
    processes.
    
    Answer: In general, the operating system must save the state of the
    currently running process and restore the state of the process
    scheduled to be run next. Saving the state of a process typically
    includes the values of all the CPU registers in addition to memory
    allocation. Context switches must also perform many
    architecture-specific operations, including flushing data and
    instruction caches.
    
    4.5 What are the benefits and detriments of each of the following?
    Consider both the systems and the programmers' levels.
    
    a. Symmetric and asymmetric communication
    b. Automatic and explicit buffering
    c. Send by copy and send by reference
    d. Fixed-sized and variable-sized messages
    
    Answer:
    
    a) Symmetric direct communication is a pain since both sides need the
       name of the other process.  This makes it hard to build a server.
    b) Automatic makes programming easier but is a harder system to build.
    c) Send by copy is better for network generalization and
       synchronization issues.  Send by reference is more efficient for big
       data structures but harder to code because of the shared memory
       implications.
    d) Variable sized makes programming easier but is a harder system to
       build.
    
    4.6 The correct producer-consumer algorithm in Section 4.4 allows only
    n-1 buffers to be full at any one time. Modify the algorithm to allow
    all buffers to be utilized fully.
    
    Answer: See Chapter 7.1 page 190.
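
    Since that page is not reproduced here, the following is a sketch
    of the standard fix under the usual assumptions: counting
    semaphores empty (initialized to n) and full (initialized to 0)
    track free and used slots, so all n buffers can be occupied at
    once. The names are illustrative, not the book's code.

    #include <semaphore.h>

    #define N 10                  /* number of buffers */

    int buffer[N];
    int in = 0, out = 0;
    sem_t empty, full, mutex;

    void init(void)
    {
      sem_init(&empty, 0, N);     /* N free slots initially */
      sem_init(&full, 0, 0);      /* no filled slots yet */
      sem_init(&mutex, 0, 1);     /* protects buffer, in, out */
    }

    void produce(int item)
    {
      sem_wait(&empty);           /* blocks only when all N slots are full */
      sem_wait(&mutex);
      buffer[in] = item;
      in = (in + 1) % N;
      sem_post(&mutex);
      sem_post(&full);
    }

    int consume(void)
    {
      sem_wait(&full);            /* blocks only when all N slots are empty */
      sem_wait(&mutex);
      int item = buffer[out];
      out = (out + 1) % N;
      sem_post(&mutex);
      sem_post(&empty);
      return item;
    }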
    
    5.3 What are two differences between user-level threads and
    kernel-level threads? Under what circumstances is one type better than
    the other?
    
    Answer: (1) User-level threads are unknown by the kernel, whereas the
    kernel is aware of kernel threads. (2) User threads are scheduled by
    the thread library and the kernel schedules kernel threads. (3) Kernel
    threads need not be associated with a process whereas every user
    thread belongs to a process.
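
    For context (an illustrative sketch, not part of the original
    answer), kernel-level threads on a POSIX system are typically
    reached through the pthreads library; on a one-to-one
    implementation, each thread created below is visible to and
    scheduled by the kernel:

    #include <pthread.h>
    #include <stdio.h>

    static void *worker(void *arg)
    {
      printf("worker %d running\n", *(int *)arg);
      return NULL;
    }

    int main(void)
    {
      pthread_t tid;
      int id = 1;
      pthread_create(&tid, NULL, worker, &id);  /* kernel-scheduled thread */
      pthread_join(tid, NULL);                  /* wait for it to finish */
      return 0;
    }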
    
    5.4 Describe the actions taken by a kernel to context switch between
    kernel-level threads.
    
    Answer: Context switching between kernel threads typically
    requires saving the value of the CPU registers from the thread being
    switched out and restoring the CPU registers of the new thread being
    scheduled.
    
    5.5 Describe the actions taken by a thread library to context switch
    between user-level threads.
    
    Answer: Context switching between user threads is quite similar to
    switching between kernel threads, although it is dependent on the
    threads library and how it maps user threads to kernel threads. In
    general, context switching between user threads involves taking a user
    thread off its LWP and replacing it with another thread. This act
    typically involves saving and restoring the state of the registers.
    
    5.7 Assume an operating system maps user-level threads to the kernel
    using the many-to-many model where the mapping is done through
    LWPs. Furthermore, the system allows the developers to create
    real-time threads. Is it necessary to bind a real-time thread to an
    LWP? Explain.
    
    Answer: You shouldn't have a mix of general threads and a real-time
    thread all bound to a single LWP.  The general threads may make a
    blocking system call, causing the LWP to wait and possibly miss a time
    guarantee.  However, this can be avoided with the many-to-many model,
    since there can be many LWPs associated with the process.
    
    7.4 (The exercise asks for a proof that a two-process
    critical-section algorithm satisfies mutual exclusion, progress,
    and bounded waiting.)
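
    The exercise's code is not reproduced in these notes, so below is a
    reconstruction of the two-process algorithm the proofs appear to
    reference (Dekker's algorithm; this is inferred from the proofs'
    mention of a "second while loop" and of the exiting process handing
    turn to the waiting process, so treat it as an assumption):

    /* shared variables */
    boolean flag[2] = { false, false };  /* flag[k]: process k wants in */
    int turn = 0;

    /* process i; j is the other process */
    do {
      flag[i] = true;
      while (flag[j]) {              /* first while loop */
        if (turn == j) {
          flag[i] = false;
          while (turn == j)          /* second while loop */
            ;                        /* spin with flag[i] == false */
          flag[i] = true;
        }
      }

      critical section;

      turn = j;                      /* exiting process hands over the turn */
      flag[i] = false;
    } while (1);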
    
    Mutually exclusive
    ------------------
     
    If we are in the critical section of process k then flag[k]=true.
    Therefore if both processes i and j are in their critical sections then
    flag[i]=true and flag[j]=true.  Without loss of generality, assume
    that j is the second process to enter its critical section.  The only
    way it can enter is if flag[i]=false.  This is a contradiction.
     
    Progress
    --------
     
    Assume both processes are trying to get into their critical regions.
    The only way a process cannot get into its critical region is if it
    gets stuck at one of the while loops.  Without loss of generality,
    assume that turn=1.  In this case, if process 0 doesn't make any
    progress it must get stuck at the second while loop.  Also, based on
    the code, flag[0]=false.  Since turn=1 and flag[0]=false then process
    1 can not get stuck at a while loop and it must be allowed to enter
    its critical region.
     
    Bounded Waiting
    ---------------
     
    Technically, this doesn't satisfy the bounded-waiting requirement.
    (At least based on the definition of bounded waiting our book gives.)
    Assume process i wants into its critical region and that process j
    is let in instead.  When process j finishes its critical region, it
    will set turn=i.  This will allow process i to enter its critical
    region after it sets flag[i]=true.  However, until it sets
    flag[i]=true, process j can go through its critical region an
    unbounded number of times.  Of course, since we know process i will
    eventually run and set flag[i]=true, this is probably OK.
    
    7.9 (the cigarette-smokers problem)

    Shared variables

    int table[3]={0,0,0};  /* table[i]=1 when ingredient i is on the table */
    Semaphore supply=0;
    Semaphore dealer=1;

    /* Smoker who needs ingredients i and j (and already has the third) */
    void S(int i, int j)
    {
      while (1) {
        wait(supply);
        if (table[i]==1 && table[j]==1) {
          table[i]=0;
          table[j]=0;
          smoke;
          signal(dealer);
        }
      }
    }

    void D()
    {
      int x, y;
      while (1) {
        wait(dealer);
        random(&x,&y); /*Randomly picks two distinct values in {0,1,2}*/
        table[x]=1;
        table[y]=1;
        signal(supply);  /* wake all three smokers; only the one whose */
        signal(supply);  /* two missing ingredients are on the table   */
        signal(supply);  /* actually smokes                            */
      }
    }
    
    6.3 Consider the following set of processes, with the length of the
        CPU-burst time given in milliseconds:
    
          Process          Burst Time              Priority
            P1                  10                    3
            P2                   1                    1
            P3                   2                    3
            P4                   1                    4
            P5                   5                    2
    
        The processes are assumed to have arrived in the order P1, P2, P3,
        P4, P5, all at time 0.
    
         a. Draw four Gantt charts illustrating the execution of these
             processes using FCFS, SJF, a nonpreemptive priority (a
             smaller priority number implies a higher priority), and RR
             (quantum = 1) scheduling.
    
         b. What is the turnaround time of each process for each of the
             scheduling algorithms in part a?
    
         c. What is the waiting time of each process for each of the
             scheduling algorithms in part a?
    
         d. Which of the schedules in part a results in the minimal
             average waiting time (over all processes)?
    
        Answer:
         a. The four Gantt charts are
      
     |       1       |2|3 |4|       5       |  FCFS
    
         |1|2|3|4|5|1|3|5|1|5|1|5|1|5|     1    |  RR
    
         |2|4|3 |    5   |           1          |  SJF
    
         |2|    5    |           1         | 3|4|  Priority
    
         b. Turnaround time
    
               FCFS            RR          SJF            Priority
    P1          10             19           19                16
    P2          11              2            1                 1
    P3          13              7            4                18
    P4          14              4            2                19
    P5          19             14            9                 6
    
         c. Waiting time (turnaround time minus burst time)
    
    
                FCFS           RR           SJF          Priority
    P1            0             9            9               6
    P2           10             1            0               0
    P3           11             5            2              16
    P4           13             3            1              18
    P5           14             9            4               1
    
    d. Shortest Job First
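
    As a quick cross-check of parts b and c (an illustrative sketch,
    not part of the original answer), the FCFS column can be computed
    directly from the burst list, since under FCFS each process waits
    for the sum of the bursts ahead of it:

    #include <stdio.h>

    int main(void)
    {
      int burst[] = { 10, 1, 2, 1, 5 };  /* P1..P5, all arriving at t = 0 */
      int t = 0;
      for (int i = 0; i < 5; i++) {
        int waiting = t;                 /* time spent before first run */
        t += burst[i];                   /* t is now the finishing time */
        printf("P%d: turnaround=%2d waiting=%2d\n", i + 1, t, waiting);
      }
      return 0;
    }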
    
    6.4 Suppose that the following processes arrive for execution at the
        times indicated. Each process will run the listed amount of
        time. In answering the questions, use nonpreemptive scheduling and
        base all decisions on the information you have at the time the
        decision must be made.
    
          Process            Arrival Time            Burst Time
            P1                   0.0                    8
            P2                   0.4                    4
            P3                   1.0                    1
    
          a. What is the average turnaround time for these processes with
             the FCFS scheduling algorithm?
    
          b. What is the average turnaround time for these processes with
             the SJF scheduling algorithm?
    
          c. The SJF algorithm is supposed to improve performance, but
             notice that we chose to run process P1 at time 0 because we
             did not know that two shorter processes would arrive
             soon. Compute what the average turnaround time will be if the
             CPU is left idle for the first 1 unit and then SJF scheduling
             is used. Remember that processes P1 and P2 are waiting during
             this idle time, so their waiting time may increase. This
             algorithm could be known as future-knowledge scheduling.
    
   Answer:
      a. 10.53
      b. 9.53
      c. 6.86

   Remember that turnaround time is finishing time minus arrival time,
   so you have to subtract the arrival times to compute the turnaround
   times. For example, under FCFS the processes finish at times 8, 12,
   and 13, so the average turnaround time is ((8-0) + (12-0.4) +
   (13-1))/3 = 31.6/3 = 10.53. FCFS is 11 if you forget to subtract
   arrival time.
    
    6.8 Many CPU scheduling algorithms are parameterized. For example, the
        RR algorithm requires a parameter to indicate the time
        slice. Multilevel feedback queues require parameters to define
        the number of queues, the scheduling algorithms for each queue,
        the criteria used to move processes between queues, and so on.
        These algorithms are thus really sets of algorithms (for example,
        the set of RR algorithms for all time slices, and so on). One set
        of algorithms may include another (for example, the FCFS algorithm
        is the RR algorithm with an infinite time quantum). What (if any)
        relation holds between the following pairs of sets of algorithms?
    
          a. Priority and SJF
          b. Multilevel feedback queues and FCFS
          c. Priority and FCFS
          d. RR and SJF
       Answer:
          a. The shortest job has the highest priority.
          b. The lowest level of MLFQ is often FCFS.
          c. FCFS gives the highest priority to the job having been in
             existence the longest.
          d. If the quantum is small enough then the jobs finish in the
             same order.
    
          Note: Other answers are possible.
    
     6.9 Suppose that a scheduling algorithm (at the level of short-term
         CPU scheduling) favors those processes that have used the least
         processor time in the recent past. Why will this algorithm favor
         I/O-bound programs and yet not permanently starve CPU-bound
         programs?
    
         Answer: It will favor the I/O-bound programs because of the
         relatively short CPU burst request by them; however, the
         CPU-bound programs will not starve because the I/O-bound programs
     will relinquish the CPU relatively often to do their I/O.  Also,
     the CPU-bound programs must get at least as much CPU time as the
     I/O-bound programs: if they did not, the CPU-bound programs would
     have used less processor time in the recent past and would
     themselves become the favored processes.
    
    6.10 Explain the differences in the degree to which the following
         scheduling algorithms discriminate in favor of short processes:
           a. FCFS
           b. RR
           c. Multilevel feedback queues
    
         Answer:
           a. FCFS--discriminates against short jobs since any short jobs
              arriving after long jobs will have a longer waiting time.
           b. RR--treats all jobs equally (giving them equal bursts of CPU
              time) so short jobs will be able to leave the system faster
              since they will finish first.
        c. Multilevel feedback queues--work similarly to the RR
           algorithm--they discriminate favorably toward short jobs.
    
           Note: Other answers are possible.
    
    7.1 What is the meaning of the term busy waiting? What other kinds of
        waiting are there in an operating system? Can busy waiting be
        avoided altogether? Explain your answer.
    
    Busy waiting is waiting without giving up the CPU.  A preferred
    type of waiting is to wait on a non-running queue.  The techniques
    in the book will still give some busy waiting on the critical
    sections of a semaphore.  It might be possible to avoid busy
    waiting by changing the critical-section code of 7.2 and 7.3 so
    that it blocks processes instead.
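
    As a minimal sketch of what busy waiting looks like in code
    (assuming C11 atomics; illustrative only, not the book's semaphore
    implementation):

    #include <stdatomic.h>

    atomic_flag lock = ATOMIC_FLAG_INIT;

    void enter_region(void)
    {
      while (atomic_flag_test_and_set(&lock))
        ;              /* busy waiting: spins without giving up the CPU */
    }

    void leave_region(void)
    {
      atomic_flag_clear(&lock);
    }

    A blocking semaphore would instead put the caller on a wait queue
    inside enter_region() and have leave_region() wake one waiter.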
    
    7.5 (The exercise asks for a proof that the following n-process
    critical-section algorithm satisfies the three requirements; the
    numeric labels in the left margin mark the statements referenced
    in the proofs below.)
     
    do {
     
      /*Shared: int turn; flag[i] is one of idle, want_in, in_cs*/
      /*Code Block A: entry section for process i*/
      while (1) {
        flag[i]=want_in;
        j=turn;
        while (j!=i) {
          if (flag[j]!=idle)
            j=turn;
          else
            j=(j+1)%n;
        }
    1   flag[i]=in_cs;
        j=0;
        while ((j<n) && (j==i || flag[j]!=in_cs))
          j++;
    2   if ((j>=n) && (turn==i || flag[turn]==idle))
          break;
      }
    3 turn=i;
    
      critical section;
    
      /*Code Block B: exit section; pass the turn to the nearest non-idle process*/
      j=(turn+1)%n;
      while (flag[j]==idle)
        j=(j+1)%n;
      turn=j;
      flag[i]=idle;
    
    } while(1);
    
    Mutually exclusive
    ------------------
    
    Assume two processes execute statement (3) and are currently in their
    critical regions.  Both of these processes must have executed
    statement (1).  Let Pk be the second of these two processes to
    finish statement (1).  This process can't be in its critical section,
    since statement (2) must be false for it.  Contradiction.
    Progress
    --------
    
    At least one process where flag[]=want_in must get its flag[]=in_cs.
    (This is the process that is as close as possible to turn.)  Consider
    the first time a process has flag[]=in_cs.  If none of these processes
    get into their critical region, then they have flag[]=want_in.  Of all
    these processes, only the process closest to turn can get flag[]=in_cs
    a second time.  Call this process k.  Still, other processes can have
    flag[]=in_cs if they are closer to turn than process k.  The same
    argument applies to these processes.  If none of these processes
    get into their critical region, then only the one closest to turn will
    get flag[]=in_cs on the next iteration through the loop.  Since there
    are a finite number of processes, eventually only one process will have
    flag[]=in_cs.  This process must eventually be let into the critical
    region.
    
    Bounded Waiting
    ---------------
    
    Assume process k requests its critical section by setting
    flag[k]=want_in.  After this occurs, the next process, call it q, that
    finishes statement 3 sets turn=q and enters its critical section.  No
    other process can modify turn unless it is just about to enter its
    critical section or is leaving its critical region.  When leaving the
    critical section, a process must execute code block B.  Code block B
    sets turn equal to the closest non-idle process.  Based on the code,
    if a process is waiting, turn is the number of the next process that
    is allowed into its critical region.  Since flag[k]=want_in,
    eventually turn must be set to k.  This means that process k must wait
    for at most n-1 other processes to enter and leave their critical
    regions.
    
    
    7.12 The strict mutual exclusion within a monitor makes the
        bounded-buffer monitor of Exercise 7.11 mainly suitable for
        small portions.
          a. Explain why this assertion is true.
          b. Design a new scheme that is suitable for larger portions.

        a) This question is somewhat vague.  I guess what they are getting
           at is that a small buffer is often going to be full or
           empty, but a big buffer is most often going to be in an
           in-between state.  Therefore the mutual exclusion will
           really slow down the two processes trying to access the
           buffer.
        b) Just use the old solution where the buffer is declared one
           slot too big, and perhaps add some synchronization to block
           a process if the buffer is full or empty, to prevent
           spinning.
    
    7.22 Explain the concept of transaction atomicity.
    
        This was not covered in the assigned readings.
    
    8.2 Is it possible to have a deadlock involving only one single
        process? Explain your answer.
    
        Answer: No. This follows directly from the hold-and-wait condition.
    
    8.6
    
    a) always safe
    b) not always safe; run the banker's algorithm to see if it's OK.
    c) not always safe; run the banker's algorithm to see if it's OK.
    d) always safe
    e) not always safe; run the banker's algorithm to see if it's OK.
    f) always safe
    
    8.10 I don't understand the question.
    
    8.13
    
    a) The matrix Need (Need = Max - Allocation):
         A   B   C   D
    P0   0   0   0   0
    P1   0   7   5   0
    P2   1   0   0   2
    P3   0   0   2   0
    P4   0   6   4   2
     
    b) yes, (P0;P3;P2;P4;P1)
     
    c) yes since it will still be in a safe state, (P0;P2;P1;P3;P4)
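
    A sketch of the safety check behind parts b and c. The Need matrix
    is part a's; the Allocation matrix and Available vector are not
    shown in these notes and are assumed from the textbook's exercise
    data (which part a's Need matrix is consistent with), so treat
    them as assumptions:

    #include <stdbool.h>
    #include <stdio.h>

    #define P 5   /* processes */
    #define R 4   /* resource types A..D */

    int need[P][R]  = {{0,0,0,0},{0,7,5,0},{1,0,0,2},{0,0,2,0},{0,6,4,2}};
    int alloc[P][R] = {{0,0,1,2},{1,0,0,0},{1,3,5,4},{0,6,3,2},{0,0,1,4}};
    int avail[R]    = {1,5,2,0};        /* assumed from the exercise */

    bool safe(void)
    {
      int work[R];
      bool done[P] = { false };
      for (int r = 0; r < R; r++) work[r] = avail[r];

      for (int pass = 0; pass < P; pass++)      /* at most P rounds */
        for (int p = 0; p < P; p++) {
          if (done[p]) continue;
          bool runnable = true;
          for (int r = 0; r < R; r++)
            if (need[p][r] > work[r]) { runnable = false; break; }
          if (runnable) {                       /* p can finish; reclaim */
            for (int r = 0; r < R; r++) work[r] += alloc[p][r];
            done[p] = true;
            printf("P%d ", p);                  /* one safe ordering */
          }
        }
      for (int p = 0; p < P; p++)
        if (!done[p]) return false;
      return true;
    }

    This finds the safe sequence P0, P2, P3, P4, P1; the answer's
    (P0, P3, P2, P4, P1) is another valid ordering, since safe
    sequences are not unique.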
    
    8.14 Consider the following resource-allocation policy. Requests and
         releases for resources are allowed at any time. If a request for
         resources cannot be satisfied because the resources are not
         available, then we check any processes that are blocked, waiting
         for resources. If they have the desired resources, then these
         resources are taken away from them and are given to the
         requesting process. The vector of resources for which the waiting
         process is waiting is increased to include the resources that
         were taken away.
    
              For example, consider a system with three resource types and
         the vector Available initialized to (4,2,2). If process P0 asks
         for (2,2,1), it gets them. If P1 asks for (1,0,1), it gets
         them. Then, if P0 asks for (0,0,1), it is blocked (resource not
         available). If P2 now asks for (2,0,0), it gets the available one
         (1,0,0) and one that was allocated to P0 (since P0 is blocked).
         P0's Allocation vector goes down to (1,2,1), and its Need vector
         goes up to (1,0,1).
    
           a. Can deadlock occur? If so, give an example. If not, which
               necessary condition cannot occur?
           b. Can indefinite blocking occur?
    
         Answer:
           a. Deadlock cannot occur because preemption exists.
       b. Yes. A process may never acquire all the resources it needs
           if they are continuously preempted by a series of requests
           such as those of P2 in the example above.
    
    Additional problem: force process A's critical section to execute
    before process B's.

    Shared Memory
    -------------
    semaphore SA=1;  /* A may proceed immediately */
    semaphore SB=0;  /* B must wait until A signals */
    process A()
    {
      wait(SA);
      critical section
      signal(SB);
    }
     
    process B()
    {
      wait(SB);
      critical section
      signal(SA);
    }
    
     
    9.1 Name two differences between logical and physical addresses.

        Answer: A logical address is an address seen by the CPU, while
        a physical address is the address seen by the memory unit. A
        physical address is limited by the amount of installed memory,
        while a logical address is limited by the address size of the
        processor.

    9.2 Explain the difference between internal and external
        fragmentation.

        Answer: Internal fragmentation is the area in a region or a
        page that is not used by the job occupying that region or
        page. This space is unavailable for use by the system until
        that job is finished and the page or region is
        released. External fragmentation is free memory outside any
        allocated region; it is unusable because it is broken into
        pieces too small to satisfy requests.

    9.5 Given memory partitions of 100K, 500K, 200K, 300K, and 600K
        (in order), how would each of the First-fit, Best-fit, and
        Worst-fit algorithms place processes of 212K, 417K, 112K, and
        426K (in order)? Which algorithm makes the most efficient use
        of memory?

        Answer:
          a. First-fit:
             212K is put in the 500K partition
             417K is put in the 600K partition
             112K is put in the 288K partition (new partition 288K = 500K - 212K)
             426K must wait
          b. Best-fit:
             212K is put in the 300K partition
             417K is put in the 500K partition
             112K is put in the 200K partition
             426K is put in the 600K partition
          c. Worst-fit:
             212K is put in the 600K partition
             417K is put in the 500K partition
             112K is put in the 388K partition
             426K must wait

        In this example, Best-fit turns out to be the best.

    9.7 Why are page sizes always powers of 2?

        Answer: Recall that paging is implemented by breaking up an
        address into a page number and an offset. It is most efficient
        to break the address into X page bits and Y offset bits,
        rather than perform arithmetic on the address to calculate the
        page number and offset. Because each bit position represents a
        power of 2, splitting an address between bits results in a
        page size that is a power of 2.

    9.8 Consider a logical address space of eight pages of 1024 words
        each, mapped onto a physical memory of 32 frames.
          a. How many bits are there in the logical address?
          b. How many bits are there in the physical address?

        Answer:
          a. Logical address: 13 bits (8 pages x 1024 words = 2^13 words)
          b. Physical address: 15 bits (32 frames x 1024 words = 2^15 words)

    9.10 Consider a paging system with the page table stored in memory.
          a. If a memory reference takes 200 nanoseconds, how long does
             a paged memory reference take?
          b. If we add associative registers, and 75 percent of all
             page-table references are found in the associative
             registers, what is the effective memory reference time?
             (Assume that finding a page-table entry in the
             associative registers takes zero time, if the entry is
             there.)

         Answer:
          a. 400 nanoseconds; 200 nanoseconds to access the page table
             and 200 nanoseconds to access the word in memory.
          b. Effective access time = 0.75 x (200 nanoseconds) +
             0.25 x (400 nanoseconds) = 250 nanoseconds.

    10.3 A certain computer provides its users with a virtual-memory
         space of 2^32 bytes. The computer has 2^18 bytes of physical
         memory. The virtual memory is implemented by paging, and the
         page size is 4096 bytes. A user process generates the virtual
         address 11123456. Explain how the system establishes the
         corresponding physical location. Distinguish between software
         and hardware operations.

         Answer: The virtual address in binary form is

             0001 0001 0001 0010 0011 0100 0101 0110

         Since the page size is 2^12 bytes, the page table has 2^20
         entries. Therefore the low-order 12 bits "0100 0101 0110" are
         used as the displacement into the page, while the remaining
         20 bits "0001 0001 0001 0010 0011" are used as the index into
         the page table.
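    A small sketch of the bit-splitting that 9.7 and 10.3 describe
    (illustrative; it assumes the 4096-byte page size of 10.3, i.e. 12
    offset bits, and reads the address 11123456 as hexadecimal, which
    matches the binary expansion above):

    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_BITS 12                     /* 4096-byte pages */

    int main(void)
    {
      uint32_t vaddr  = 0x11123456;          /* virtual address from 10.3 */
      uint32_t page   = vaddr >> PAGE_BITS;              /* high 20 bits */
      uint32_t offset = vaddr & ((1u << PAGE_BITS) - 1); /* low 12 bits  */
      printf("page=0x%05x offset=0x%03x\n", page, offset);
      return 0;
    }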
    10.4 Which of the following programming techniques and structures
         are "good" for a demand-paged environment? Which are "not
         good"? Explain your answers.
           a. Stack
           b. Hashed symbol table
           c. Sequential search
           d. Binary search
           e. Pure code
           f. Vector operations
           g. Indirection

         Answer:
           a. Stack--good.
           b. Hashed symbol table--not good.
           c. Sequential search--good.
           d. Binary search--not good.
           e. Pure code--good.
           f. Vector operations--good.
           g. Indirection--not good.

    10.5 Assume we have a demand-paged memory. The page table is held
         in registers. It takes 8 milliseconds to service a page fault
         if an empty page is available or the replaced page is not
         modified, and 20 milliseconds if the replaced page is
         modified. Memory access time is 100 nanoseconds. Assume that
         the page to be replaced is modified 70 percent of the
         time. What is the maximum acceptable page-fault rate for an
         effective access time of no more than 200 nanoseconds?

         Answer: All work is done in microseconds.

             0.2 = 0.1 + P x (0.3 x 8,000 + 0.7 x 20,000)
             P = 0.1 / 16,400 = 6.1 x 10^-6

    10.8 An operating system supports a paged virtual memory, using a
         central processor with a cycle time of 1 microsecond. It
         costs an additional 1 microsecond to access a page other than
         the current one. Pages have 1000 words, and the paging device
         is a drum that rotates at 3000 revolutions per minute and
         transfers 1 million words per second. The following
         statistical measurements were obtained from the system: 1
         percent of all instructions executed accessed a page other
         than the current page. Of the instructions that accessed
         another page, 80 percent accessed a page already in
         memory. When a new page was required, the replaced page was
         modified 50 percent of the time. Calculate the effective
         instruction time on this system, assuming that the system is
         running one process only, and that the processor is idle
         during drum transfers.

         Answer: All work is done in microseconds. A drum transfer
         costs 11,000 microseconds (10,000 for the average
         half-revolution of rotational latency plus 1,000 to transfer
         the 1000-word page), and 22,000 when the replaced page must
         be written back first.

             Effective instruction time
               = 0.99 x 1 + 0.01 x (0.8 x 2 + 0.2 x (0.5 x 11,000 + 0.5 x 22,000))
               = 34 microseconds

    10.9 Consider a demand-paging system with the following
         time-measured utilizations:

             CPU utilization        20%
             Paging disk            97.7%
             Other I/O devices      5%

         Which (if any) of the following will (probably) improve CPU
         utilization? Explain your answer.
           a. Install a faster CPU.
           b. Install a bigger paging disk.
           c. Increase the degree of multiprogramming.
           d. Decrease the degree of multiprogramming.
           e. Install more main memory.
           f. Install a faster hard disk or multiple controllers with
              multiple hard disks.
           g. Add prepaging to the page fetch algorithms.
           h. Increase the page size.

         Answer: The system obviously is spending most of its time
         paging, indicating over-allocation of memory. If the level of
         multiprogramming is reduced, resident processes would page
         fault less frequently and the CPU utilization would
         improve. Another way to improve performance would be to get
         more physical memory or a faster paging drum.
           a. Install a faster CPU--No.
           b. Install a bigger paging disk--No.
           c. Increase the degree of multiprogramming--No.
           d. Decrease the degree of multiprogramming--Yes.
           e. Install more main memory--Likely to improve CPU
              utilization, as more pages can remain resident and not
              require paging to or from the disks.
           f. Install a faster hard disk, or multiple controllers with
              multiple hard disks--Also an improvement, for as the
              disk bottleneck is removed by faster response and more
              throughput to the disks, the CPU will get more data more
              quickly.
           g. Add prepaging to the page fetch algorithms--Again, the
              CPU will get more data faster, so it will be more in
              use. This is only the case if the paging action is
              amenable to prefetching (i.e., some of the access is
              sequential).
           h. Increase the page size--Increasing the page size will
              result in fewer page faults if data is being accessed
              sequentially. If data access is more or less random,
              more paging action could ensue because fewer pages can
              be kept in memory and more data is transferred per page
              fault. So this change is as likely to decrease
              utilization as it is to increase it.

    10.11 Consider the following page reference string:

              1, 2, 3, 4, 2, 1, 5, 6, 2, 1, 2, 3, 7, 6, 3, 2, 1, 2, 3, 6

          How many page faults would occur for the following
          replacement algorithms, assuming one, two, three, four,
          five, six, or seven frames? Remember all frames are
          initially empty, so your first unique pages will all cost
          one fault each.
            LRU replacement
            FIFO replacement
            Optimal replacement

          Answer:
              Number of frames    LRU    FIFO    Optimal
                     1             20     20       20
                     2             18     18       15
                     3             15     16       11
                     4             10     14        8
                     5              8     10        7
                     6              7     10        7
                     7              7      7        7

    10.17 Consider a demand-paging system with a paging disk that has
          an average access and transfer time of 20
          milliseconds. Addresses are translated through a page table
          in main memory, with an access time of 1 microsecond per
          memory access. Thus, each memory reference through the page
          table takes two accesses. To improve this time, we have
          added an associative memory that reduces access time to one
          memory reference, if the page-table entry is in the
          associative memory. Assume that 80 percent of the accesses
          are in the associative memory and that, of the remaining, 10
          percent (or 2 percent of the total) cause page faults. What
          is the effective memory access time?

          Answer: All work is done in microseconds.

              Effective access time = 0.8 x 1 + 0.2 x (0.9 x 2 + 0.1 x 20,002)
                                    = 401.2 microseconds

    10.18 Consider a demand-paged computer system where the degree of
          multiprogramming is currently fixed at four. The system was
          recently measured to determine utilization of the CPU and
          the paging disk. The results are one of the following
          alternatives. For each case, what is happening? Can the
          degree of multiprogramming be increased to increase the CPU
          utilization? Is the paging helping?
            a. CPU utilization 13 percent; disk utilization 97 percent
            b. CPU utilization 87 percent; disk utilization 3 percent
            c. CPU utilization 13 percent; disk utilization 3 percent

          Answer:
            a. Thrashing is occurring.
            b. CPU utilization is sufficiently high to leave things alone.
            c. Increase the degree of multiprogramming.
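
    The effective-access-time computations in 10.5, 10.8, and 10.17
    all follow the same expected-value pattern; a sketch of 10.17's
    (times in microseconds, illustrative only):

    #include <stdio.h>

    int main(void)
    {
      /* 80% hit in the associative memory (1 access); of the misses,
         90% take two accesses and 10% page-fault (20,000 + 2 = 20,002). */
      double eat = 0.8 * 1.0 + 0.2 * (0.9 * 2.0 + 0.1 * 20002.0);
      printf("10.17 effective access time = %.1f us\n", eat);  /* 401.2 */
      return 0;
    }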
     
     
    9.16 Consider the following segment table:

           Segment    Base    Length
                 0     219       600
                 1    2300        14
                 2      90       100
                 3    1327       580
                 4    1952        96

      What are the physical addresses for the following logical addresses?
        a. 0,430
        b. 1,10
        c. 2,500
        d. 3,400
        e. 4,112

      Answer:
        a. 219 + 430 = 649
        b. 2300 + 10 = 2310
        c. illegal reference (offset 500 exceeds the segment length of
           100); trap to the operating system
        d. 1327 + 400 = 1727
        e. illegal reference (offset 112 exceeds the segment length of
           96); trap to the operating system
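
    A sketch of the check the answer performs for each reference
    (illustrative; the trap to the operating system is represented by
    a -1 return value):

    #include <stdio.h>

    struct segment { int base, length; };

    struct segment table[] = {
      { 219, 600}, {2300, 14}, {90, 100}, {1327, 580}, {1952, 96}
    };

    /* Returns the physical address, or -1 standing in for a trap. */
    int translate(int seg, int offset)
    {
      if (offset >= table[seg].length)
        return -1;                    /* illegal reference */
      return table[seg].base + offset;
    }

    int main(void)
    {
      printf("%d %d %d %d %d\n",
             translate(0, 430), translate(1, 10), translate(2, 500),
             translate(3, 400), translate(4, 112));
      /* prints: 649 2310 -1 1727 -1 */
      return 0;
    }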
    
    13.2 Consider the following I/O scenarios on a single-user PC.
    
      a. A mouse used with a graphical user interface
      b. A tape drive on a multitasking operating system (assume no device
         preallocation is available)
      c. A disk drive containing user files
      d. A graphics card with direct bus connection, accessible through
         memory-mapped I/O
    
    For each of these I/O scenarios, would you design the operating system
    to use buffering, spooling, caching, or a combination? Would you use
    polled I/O, or interrupt-driven I/O?  Give reasons for your choices.
    
    Answer:

    a. A mouse used with a graphical user interface. Buffering may be
    needed to record mouse movement during times when higher-priority
    operations are taking place. Spooling and caching are
    inappropriate. Interrupt-driven I/O is most appropriate.

    b. A tape drive on a multitasking operating system (assume no
    device preallocation is available). Buffering may be needed to
    manage throughput differences between the tape drive and the
    source or destination of the I/O. Caching can be used to hold
    copies of data that resides on the tape, for faster
    access. Spooling could be used to stage data to the device when
    multiple users desire to read from or write to
    it. Interrupt-driven I/O is likely to allow the best performance.

    c. A disk drive containing user files. Buffering can be used to
    hold data while in transit from user space to the disk, and vice
    versa. Caching can be used to hold disk-resident data for improved
    performance. Spooling is not necessary because disks are
    shared-access devices. Interrupt-driven I/O is best for devices
    such as disks that transfer data at slow rates.

    d. A graphics card with direct bus connection, accessible through
    memory-mapped I/O. Buffering may be needed to control multiple
    access and for performance (double buffering can be used to hold
    the next screen image while displaying the current one). Caching
    and spooling are not necessary due to the fast and shared-access
    natures of the device. Polling and interrupts are useful only for
    input and for I/O completion detection, neither of which is needed
    for a memory-mapped device.
    
    13.4 Describe three circumstances under which blocking I/O should be
    used. Describe three circumstances under which nonblocking I/O should
    be used. Why not just implement nonblocking I/O and have processes
    busy-wait until their device is ready?
    
    Answer:
    
    Generally, blocking I/O is appropriate when the process will only be
    waiting for one specific event. Examples include a disk, tape, or
    keyboard read by an application program.  Non-blocking I/O is useful
    when I/O may come from more than one source and the order of the I/O
    arrival is not predetermined. Examples include network daemons
    listening to more than one network socket, window managers that accept
    mouse movement as well as keyboard input, and I/O-management programs,
    such as a copy command that copies data between I/O devices. In the
    last case, the program could optimize its performance by buffering the
    input and output and using non-blocking I/O to keep both devices fully
    occupied.
    
    Non-blocking I/O is more complicated for programmers, because of
    the asynchronous rendezvous that is needed when an I/O
    occurs. Also, busy waiting is less efficient than interrupt-driven
    I/O, so the overall system performance would decrease.
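
    A minimal sketch of the multiple-source case using select()
    (illustrative; fd1 and fd2 stand for, say, two network sockets,
    and error handling is reduced to a bail-out):

    #include <sys/select.h>
    #include <unistd.h>

    void serve(int fd1, int fd2)
    {
      char buf[512];
      for (;;) {
        fd_set readable;
        FD_ZERO(&readable);
        FD_SET(fd1, &readable);
        FD_SET(fd2, &readable);
        int maxfd = (fd1 > fd2 ? fd1 : fd2) + 1;

        /* Block until either source has data; neither read below can
           then hold up the other source indefinitely. */
        if (select(maxfd, &readable, NULL, NULL, NULL) < 0)
          break;
        if (FD_ISSET(fd1, &readable)) read(fd1, buf, sizeof buf);
        if (FD_ISSET(fd2, &readable)) read(fd2, buf, sizeof buf);
      }
    }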
    
    13.5 Why might a system use interrupt-driven I/O to manage a single
    serial port, but polling I/O to manage a front-end processor, such as
    a terminal concentrator?
    
    Answer: Polling can be more efficient than interrupt-driven I/O. This
    is the case when the I/O is frequent and of short duration. Even
    though a single serial port will perform I/O relatively infrequently
    and should thus use interrupts, a collection of serial ports such as
    those in a terminal concentrator can produce a lot of short I/O
    operations, and interrupting for each one could create a heavy load on
    the system. A well-timed polling loop could alleviate that load
    without wasting many resources through looping with no I/O needed.
    
    13.6 Polling for an I/O completion can waste a large number of CPU
    cycles if the processor iterates a busy-waiting loop many times before
    the I/O completes. But if the I/O device is ready for service, polling
    can be much more efficient than is catching and dispatching an
    interrupt. Describe a hybrid strategy that combines polling, sleeping,
    and interrupts for I/O device service. For each of these three
    strategies (pure polling, pure interrupts, hybrid), describe a
    computing environment in which that strategy is more efficient than is
    either of the others.
    
    Answer: For a hybrid strategy, put a limit on the number of
    iterations spent in the busy loop.  Let C be the amount of time it
    takes to catch and dispatch an interrupt for handling I/O.  Set
    the limit so that the time spent busy waiting is at most C.  If
    the I/O is not ready after the loops are done, put the process to
    sleep and wait for an interrupt to signal that the I/O is done.
    In the scenarios below, * marks the most efficient strategy.

    Let 3/4 of the I/Os take C/2 time and 1/4 of the I/Os take 3C/2 time.
    * average processor time wasted for polling    = 3C/8 + 3C/8 = 6C/8
      average processor time wasted for interrupts = C
      average processor time wasted for hybrid     = 3C/8 + C/2 = 7C/8

    Let all of the I/Os take 8C time.
      average processor time wasted for polling    = 8C
    * average processor time wasted for interrupts = C
      average processor time wasted for hybrid     = 2C

    Let 3/4 of the I/Os take C/2 time and 1/4 of the I/Os take 8C time.
      average processor time wasted for polling    = 3C/8 + 2C = 19C/8
      average processor time wasted for interrupts = C
    * average processor time wasted for hybrid     = 3C/8 + C/2 = 7C/8
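
    The hybrid described above, as a sketch. device_ready(),
    enable_device_interrupt(), and sleep_until_interrupt() are
    hypothetical placeholders for driver-level primitives, not real
    APIs:

    /* Spin for roughly C worth of iterations, then fall back to an
       interrupt-driven sleep (cost ~ C), matching the analysis above. */
    void wait_for_io(int spin_limit)
    {
      for (int i = 0; i < spin_limit; i++)
        if (device_ready())            /* hypothetical */
          return;                      /* fast path: polling won */
      enable_device_interrupt();       /* hypothetical */
      sleep_until_interrupt();         /* hypothetical; slow path */
    }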
    
    Virtual Memory (25 points)
    
    Consider the following paged segmentation scheme
     
           Segment Table
    Segment Number  Page Table
           0            2
           1            0
           2            1
    
                 Page Table 1
    Virtual Page  Frame#  Valid  Read Only?
         0          3       N       N
         1         22       N       N
         2         73       Y       N
         3         74       Y       N
         4         85       Y       N
         5         29       Y       N
         6         63       Y       N
         7         93       Y       N
         8         83       Y       N
         9         15       Y       N
        10         27       Y       N
        11         34       Y       N
    
                 Page Table 0
    Virtual Page  Frame#  Valid  Read Only?
         0         10       Y       N
         1         17       N       N
         2         89       Y       N
         3         90       Y       N
         4         29       Y       N
         5         47       N       N
         6         55       Y       N
         7         32       Y       N
         8         36       Y       N
         9          9       Y       N
    
                 Page Table 2
    Virtual Page  Frame#  Valid  Read Only?
         0         33       Y       Y
         1         46       Y       Y
         2         54       N       Y
         3          6       Y       Y
         4         99       N       Y
         5         67       Y       Y
         6         21       Y       Y
    
    
         "Valid" is set to Y if the page is in memory 
         "Read Only" is set to Y if the page is not writable 
    
    Given the following parameters:
    
    A page size of 1000 words using base 10 integer addresses (not
    realistic, but it makes the math easier).  Thus, the first page of
    virtual memory runs from virtual address 0 through virtual address
    999. (All accesses to memory are for 4-byte words).
    
    The maximum number of page table entries is 100 (0 to 99)
    
    Each process is separated into at most 3 segments (1 segment each for
    code, heap allocated data, and the stack).
    
    If the page is invalid (Valid = N), assume the page fault causes the
    OS to load the faulting page into the given frame number; for example,
    an access to virtual page 2 in page table 2 causes a page fault where
    the OS loads virtual page 2 into frame 54.
    
    (a) (10 points) How many integer digits do we need for the segment
    number, page number, and offset?
    
    (b) What physical addresses do the following virtual addresses access?
    (Assume any missing digits are leading zeroes).  Indicate if the
    access generates an error (invalid segment, invalid page, or
    protection violation) or a page fault.
    
    i.  (3 points) Read from virtual address 211333
    ii. (3 points) Write to virtual address 5345
    iii.(3 points) Read from virtual address 1810627
    iv. (3 points) Read from virtual address 104806
    v.  (3 points) Read from virtual address 200097
    
    Answers:

    a) segment 1 digit, page 2 digits, offset 3 digits.
    b) i.   34333
       ii.  67345, error: write to a read-only page (protection violation)
       iii. error: segment out of range (the address has too many digits)
       iv.  29806
       v.   3097, page fault (virtual page 0 of page table 1 is loaded
            into frame 3)
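
    The digit-splitting from part a, as a sketch (decimal arithmetic,
    mirroring the problem's base-10 addresses; illustrative only):

    #include <stdio.h>

    int main(void)
    {
      int vaddr   = 211333;                /* address from part b.i */
      int segment = vaddr / 100000;        /* 1 digit  */
      int page    = (vaddr / 1000) % 100;  /* 2 digits */
      int offset  = vaddr % 1000;          /* 3 digits */
      printf("seg=%d page=%d offset=%d\n", segment, page, offset);
      /* prints: seg=2 page=11 offset=333 -> page table 1, frame 34 -> 34333 */
      return 0;
    }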
    
     
    11.1 Consider a file system where a file can be deleted and its disk
         space reclaimed while links to that file still exist. What
         problems may occur if a new file is created in the same storage
         area or with the same absolute path name? How can these problems
         be avoided?
    
         Answer: Let F1 be the old file and F2 be the new file. A user
         wishing to access F1 through an existing link will actually
         access F2. Note that the access protection for file F1 is used
         rather than the one associated with F2.  This problem can be
         avoided by ensuring that all links to a deleted file are deleted
         as well.  This can be accomplished in several ways:
           a. maintain a list of all links to a file, removing each of
              them when the file is deleted
           b. retain the links, removing them when an attempt is made to
              access a deleted file
           c. maintain a file reference list (or counter), deleting the
              file only after all links or references to that file have
              been deleted.
    
    11.3 Why do some systems keep track of the type of a file, while
         others leave it to the user or simply do not implement multiple
         file types? Which system is "better?"
    
         Answer: Some systems allow different file operations based on the
         type of the file (for instance, an ASCII file can be read as a
         stream while a database file can be read via an index to a
         block). Other systems leave such interpretation of a file's data
         to the process and provide no help in accessing the data. The
         method which is "better" depends on the needs of the processes on
         the system, and the demands the users place on the operating
         system. If a system runs mostly database applications, it may be
         more efficient for the operating system to implement a
         database-type file and provide operations, rather than making
         each program implement the same thing (possibly in different
         ways). For general purpose systems it may be better to only
         implement basic file types to keep the operating system size
         smaller and allow maximum freedom to the processes on the system.
    
    11.5 What are the advantages and disadvantages of recording the name
         of the creating program with the file's attributes (as is done
         in the Macintosh Operating System)?
    
         Answer: By recording the name of the creating program, the
         operating system is able to implement features (such as automatic
         program invocation when the file is accessed) based on this
         information. It does add overhead in the operating system and
         require space in the file descriptor, however.
    
    11.8 Some systems automatically open a file when it is referenced for
         the first time, and close the file when the job
         terminates. Discuss the advantages and disadvantages of this
         scheme as compared to the more traditional one, where the user
         has to open and close the file explicitly.
    
     Answer: Automatic opening and closing of files relieves the user
     of invoking these functions, and is thus more convenient;
     however, it incurs more overhead than explicit opening and
     closing (for example, every file a job references stays open,
     consuming open-file-table space, until the job terminates).
    
    11.12 Consider a system that supports 5000 users. Suppose that you
          want to allow 4990 of these users to be able to access one file.
            a. How would you specify this protection scheme in UNIX?
            b. Could you suggest another protection scheme that can be
               used more effectively for this purpose than the scheme
               provided by UNIX?
    
          Answer:
            a. There are two methods for achieving this:
               i. Create an access control list with the names of all 4990
                  users.
              ii. Put these 4990 users in one group and set the group
                  access accordingly. This scheme cannot always be
                  implemented since user groups are restricted by the
                  system.
            b. The universe access information applies to all users unless
               their name appears in the access-control list with
               different access permission. With this scheme you simply
               put the names of the remaining ten users in the access
               control list but with no access privileges allowed.
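      A minimal Python sketch of the scheme in (b); the dictionary
      representation (explicit per-user overrides plus a universe
      default) is an assumption for illustration, not UNIX's actual
      mechanism:

          # The universe permission covers every user unless that
          # user has an explicit (here: empty) ACL entry.
          UNIVERSE = {"read"}
          acl = {f"user{i}": set() for i in range(10)}  # the 10 denied users

          def can_read(user):
              return "read" in acl.get(user, UNIVERSE)

          print(can_read("user3"))     # False: explicitly denied
          print(can_read("user4200"))  # True: universe default applies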
    
    12.1 Consider a file currently consisting of 100 blocks. Assume that
    the file control block (and the index block, in the case of indexed
    allocation) is already in memory. Calculate how many disk I/O
    operations are required for contiguous, linked, and indexed
    (single-level) allocation strategies, if, for one block, the following
conditions hold. In the contiguous-allocation case, assume that there
    is no room to grow in the beginning, but there is room to grow in the
    end. Assume that the block information to be added is stored in
    memory.
            a. The block is added at the beginning.
            b. The block is added in the middle.
            c. The block is added at the end.
            d. The block is removed from the beginning.
            e. The block is removed from the middle.
            f. The block is removed from the end.
          
    Answer:
                     Contiguous      Linked   Indexed
                a.       201            1        1
                b.       101            52       1
                c.         1            3        1
                d.       198            1        0
                e.        98            52       0
                f.         0           100       0
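Each table entry follows from counting one read plus one write for every
block that must be shifted, plus one write for a newly added block. A
minimal Python sketch, under the problem's assumptions (n = 100 blocks,
FCB/index block already in memory, and, for linked allocation, head and
tail pointers kept in the FCB), that reproduces the table:

    # Shifting a block costs 1 read + 1 write; a new block costs 1 write.
    def contiguous(op, n=100):
        return {"add_front":  2*n + 1,        # shift all n blocks, write new one
                "add_middle": 2*(n//2) + 1,   # shift the back half, write new one
                "add_end":    1,
                "del_front":  2*(n - 1),      # shift the surviving n-1 blocks
                "del_middle": 2*(n//2 - 1),   # shift the half behind the victim
                "del_end":    0}[op]

    def linked(op, n=100):
        return {"add_front":  1,          # write the new head block
                "add_middle": n//2 + 2,   # chase 50 pointers, write old + new block
                "add_end":    3,          # read tail, rewrite it, write new tail
                "del_front":  1,          # read old head for its successor pointer
                "del_middle": n//2 + 2,   # chase to the victim, rewrite predecessor
                "del_end":    n}[op]      # chase n-1 blocks to the new tail, rewrite

    def indexed(op, n=100):
        adds = {"add_front": 1, "add_middle": 1, "add_end": 1}
        return adds.get(op, 0)            # deletions touch only the in-memory index

    for op in ("add_front", "add_middle", "add_end",
               "del_front", "del_middle", "del_end"):
        print(op, contiguous(op), linked(op), indexed(op))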
    
    
    12.2 Consider a system where free space is kept in a free-space list.
    
    a. Suppose that the pointer to the free-space list is lost. Can the
    system reconstruct the free-space list? Explain your answer.
    
    b. Suggest a scheme to ensure that the pointer is never lost as a
    result of memory failure.
    
    Answer:
    
a. In order to reconstruct the free list, it would be necessary to
perform "garbage collection." This would entail searching the entire
directory structure to determine which blocks are already allocated to
files. The remaining unallocated blocks could be relinked as the
free-space list.
    
    b. The free-space list pointer could be stored on the disk, perhaps in
    several places.
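A minimal Python sketch of the "garbage collection" idea in (a), assuming
a hypothetical in-memory view of the directory structure (file name ->
list of allocated block numbers):

    # Every block not reachable from the directory structure is free.
    def rebuild_free_list(total_blocks, directory):
        allocated = set()
        for blocks in directory.values():
            allocated.update(blocks)
        return sorted(set(range(total_blocks)) - allocated)

    print(rebuild_free_list(10, {"a": [0, 1, 5], "b": [7]}))
    # -> [2, 3, 4, 6, 8, 9]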
    
    
    12.6 Consider a file system on a disk that has both logical and
    physical block sizes of 512 bytes.  Assume that the information about
    each file is already in memory. For each of the three allocation
    strategies (contiguous, linked, and indexed), answer these questions:
    
    a. How is the logical-to-physical address mapping accomplished in this
    system? (For the indexed allocation, assume that a file is always less
    than 512 blocks long.)
    
    b. If we are currently at logical block 10 (the last block accessed
    was block 10) and want to access logical block 4, how many physical
    blocks must be read from the disk?
    
Answer: Let Z be the starting file address (block number). In each case
below, item i answers question (a) and item ii answers question (b).
           
    a. Contiguous. Divide the logical address by 512 with X and Y the
    resulting quotient and remainder respectively.
    
       i. Add X to Z to obtain the physical block number. Y is the
          displacement into that block.
    
       ii. 1
    
b. Linked. Divide the logical address by 511 with X
and Y the resulting quotient and remainder respectively.
    
    
        i. Chase down the linked list (getting X + 1 blocks). Y + 1 is the
           displacement into the last physical block.
       ii. 4
    
    c. Indexed. Divide the logical address by 512 with X and Y the
       resulting quotient and remainder respectively.
    
        i. Get the index block into memory. Physical block address is
           contained in the index block at location X. Y is the
           displacement into the desired physical block.
    
       ii. 2
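A minimal Python sketch of the three mappings; the function names are
hypothetical, and the linked case returns how many blocks must be chased
(rather than a physical address) because the chain has to be followed,
with 511 data bytes per block since one byte is assumed to hold the
next-block pointer:

    def contiguous_map(la, z):
        x, y = divmod(la, 512)
        return z + x, y                  # physical block number, byte offset

    def linked_map(la):
        x, y = divmod(la, 511)           # 511 data bytes per block
        return x + 1, y + 1              # blocks to chase, offset past pointer

    def indexed_map(la, index_block):
        x, y = divmod(la, 512)
        return index_block[x], y         # block address from index, byte offset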
    
    12.7 One problem with contiguous allocation is that the user must
         preallocate enough space for each file. If the file grows to be
         larger than the space allocated for it, special actions must be
         taken. One solution to this problem is to define a file structure
         consisting of an initial contiguous area (of a specified
         size). If this area is filled, the operating system automatically
         defines an overflow area that is linked to the initial contiguous
         area. If the overflow area is filled, another overflow area is
         allocated. Compare this implementation of a file with the
         standard contiguous and linked implementations.
    
     Answer: This method requires more overhead than standard
     contiguous allocation, since the overflow links must be
     maintained, but less overhead than standard linked allocation,
     since blocks within each contiguous area can be accessed
     sequentially without chasing per-block pointers.
    
    
    12.14 Consider the following backup scheme:
              Day 1. Copy to a backup medium all files from the disk.
              Day 2. Copy to another medium all files changed since day 1.
              Day 3. Copy to another medium all files changed since day 1.
    
     This contrasts with the schedule given in Section 11.6.2 by having
     all subsequent backups copy all files modified since the first
     full backup. What are the benefits of this system over the one in
     Section 11.6.2? What are the drawbacks? Are restore operations
     made easier or more difficult? Explain your answer.
    
     Answer: Restores are easier because only the day-1 full backup
     and the most recent tape need be read; no intermediate tapes are
     required. The drawback is that more tape is consumed, since each
     subsequent backup grows to include every file changed since day 1.
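     A minimal Python sketch of a restore under this scheme
     (hypothetical dictionaries mapping file name to contents):

         # Restore = day-1 full backup overlaid with the latest tape;
         # intermediate tapes are never needed.
         def restore(full_backup, latest_tape):
             files = dict(full_backup)
             files.update(latest_tape)    # everything changed since day 1
             return files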
    
    
    14.1 None of the disk-scheduling disciplines, except FCFS, is truly
       fair (starvation may occur).
             a. Explain why this assertion is true.
             b. Describe a way to modify algorithms such as SCAN to ensure fairness.
             c. Explain why fairness is an important goal in a time-sharing system.
         d. Give three or more examples of circumstances in which it
            is important that the operating system be unfair in
            serving I/O requests.
    
    Answer:
       a. New requests for the track over which the head currently resides
          can theoretically arrive as quickly as these requests are being
          serviced.
   b. All requests older than some predetermined age could be "forced"
      to the top of the queue, and an associated bit for each could be
      set to indicate that no new request could be moved ahead of
      these requests (a minimal sketch follows below). For SSTF, the
      rest of the queue would have to be reorganized with respect to
      the last of these "old" requests.
       c. To prevent unusually long response times.
       d. Paging and swapping should take priority over user requests. It
          may be desirable for other kernel-initiated I/O, such as the
          writing of file system metadata, to take precedence over user
          I/O. If the kernel supports real-time process priorities, the
          I/O requests of those processes should be favored.
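   A minimal Python sketch of the aging idea in (b), layered on SSTF;
   the request representation and the max_age threshold are assumptions:

       from collections import namedtuple

       Request = namedtuple("Request", ["cylinder", "arrived"])

       def pick_next(queue, head, now, max_age=0.5):
           # Requests older than max_age seconds preempt normal SSTF
           # order, so closer newcomers can no longer starve them.
           aged = [r for r in queue if now - r.arrived > max_age]
           pool = aged or queue
           return min(pool, key=lambda r: abs(r.cylinder - head))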
    
    14.2 Suppose that a disk drive has 5000 cylinders, numbered 0 to
         4999. The drive is currently serving a request at cylinder 143,
         and the previous request was at cylinder 125. The queue of
         pending requests, in FIFO order, is
    
           86, 1470, 913, 1774, 948, 1509, 1022, 1750, 130
    
    Starting from the current head position, what is the total
    distance (in cylinders) that the disk arm moves to satisfy all the
    pending requests, for each of the following disk-scheduling
    algorithms?
           a. FCFS
           b. SSTF
           c. SCAN
           d. LOOK
           e. C-SCAN
           f. C-LOOK
    
        Answer: 
           a. The FCFS schedule is 143, 86, 1470, 913, 1774, 948, 1509,
              1022, 1750, 130. The total seek distance is 7081.
           b. The SSTF schedule is 143, 130, 86, 913, 948, 1022, 1470,
              1509, 1750, 1774. The total seek distance is 1745.
           c. The SCAN schedule is 143, 913, 948, 1022, 1470, 1509, 1750,
              1774, 4999, 130, 86. The total seek distance is 9769.
           d. The LOOK schedule is 143, 913, 948, 1022, 1470, 1509, 1750,
              1774, 130, 86. The total seek distance is 3319.
           e. The C-SCAN schedule is 143, 913, 948, 1022, 1470, 1509,
              1750, 1774, 4999, 0, 86, 130. The total seek distance is
              9985.
           f. The C-LOOK schedule is 143, 913, 948, 1022, 1470, 1509,
              1750, 1774, 86, 130.  The total seek distance is 3363.
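    A minimal Python sketch that recomputes these totals. Since the
    head moved from 125 to 143, SCAN and LOOK are assumed to continue
    toward higher cylinder numbers first:

        queue = [86, 1470, 913, 1774, 948, 1509, 1022, 1750, 130]
        head  = 143

        def total(seq):
            return sum(abs(b - a) for a, b in zip(seq, seq[1:]))

        def sstf(head, pending):
            seq, rest = [head], list(pending)
            while rest:                  # greedily pick the closest cylinder
                nxt = min(rest, key=lambda c: abs(c - seq[-1]))
                rest.remove(nxt)
                seq.append(nxt)
            return seq

        up   = sorted(c for c in queue if c >= head)
        down = sorted(c for c in queue if c < head)

        print(total([head] + queue))                     # FCFS:   7081
        print(total(sstf(head, queue)))                  # SSTF:   1745
        print(total([head] + up + [4999] + down[::-1]))  # SCAN:   9769
        print(total([head] + up + down[::-1]))           # LOOK:   3319
        print(total([head] + up + [4999, 0] + down))     # C-SCAN: 9985
        print(total([head] + up + down))                 # C-LOOK: 3363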
    
    14.10 Requests are not usually uniformly distributed. For example, a
          cylinder containing the file system FAT or inodes can be
          expected to be accessed more frequently than a cylinder that
          only contains files. Suppose you know that 50 percent of the
          requests are for a small, fixed number of cylinders.
    
             a. Would any of the scheduling algorithms discussed in this
                chapter be particularly good for this case? Explain your
                answer.
    
             b. Propose a disk-scheduling algorithm that gives even better
                performance by taking advantage of this "hot spot" on the
                disk.
    
             c. File systems typically find data blocks via an indirection
                table, such as a FAT in DOS or inodes in UNIX. Describe
                one or more ways to take advantage of this indirection to
                improve the disk performance.
    
          Answer:
             a. SSTF would take greatest advantage of the situation. FCFS
                could cause unnecessary head movement if references to the
                "high-demand" cylinders were interspersed with references
                to cylinders far away.
         b. Here are some ideas. Place the hot data near the middle of
            the disk. Modify SSTF to prevent starvation. Add the
            policy that if the disk becomes idle for more than, say,
            50 ms, the operating system generates an anticipatory seek
            to the hot region, since the next request is more likely
            to be there (a sketch of this policy follows below).
         c. Cache the metadata in primary memory, and locate a file's
            data and metadata in close physical proximity on the
            disk. (UNIX accomplishes the latter goal by allocating
            data and metadata in regions called cylinder groups.)
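      A minimal Python sketch of the anticipatory-seek policy in (b);
      the disk object and its pending/last_completion/seek members are
      hypothetical:

          def maybe_anticipate(disk, now, hot_cylinder, threshold=0.05):
              # Called periodically: if the disk has sat idle past the
              # threshold with nothing queued, park the arm near the hot
              # region so the likely next request incurs a short seek.
              if not disk.pending and now - disk.last_completion > threshold:
                  disk.seek(hot_cylinder)   # hypothetical disk interface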
    
    
    14.11 Why is rotational latency usually not considered in disk
          scheduling? How would you modify SSTF, SCAN, and C-SCAN to
          include latency optimization?
    
          Answer: Most disks do not export their rotational position
          information to the host.  Even if they did, the time for this
          information to reach the scheduler would be subject to
          imprecision and the time consumed by the scheduler is variable,
          so the rotational position information would become
          incorrect. Further, the disk requests are usually given in terms
          of logical block numbers, and the mapping between logical blocks
          and physical locations is very complex.
    
    
    14.14 What are the tradeoffs involved in rereading code pages from the
          file system versus using swap space to store them?
    
          Answer: If code pages are stored in swap space, they can be
          transferred more quickly to main memory (because swap space
          allocation is tuned for faster performance than general file
          system allocation). Using swap space can require startup time if
          the pages are copied there at process invocation rather than
          just being paged out to swap space on demand. Also, more swap
          space must be allocated if it is used for both code and data
          pages.
    