Operating System Homework #1

Student: Murphy Chen

Department/Year: Electrical Engineering, Senior

Instructor: 陳少傑

 

Chapter 3

Exercise 3.11 Why is the separation of mechanism and policy a desirable property?

 

! Mechanisms determine how to do something, and policies decide what will be done. Policy decisions are important for all resource allocation and scheduling problems. Whenever it is necessary to decide whether or not to allocate a resource, a policy decision must be made. Whenever the question is “how” rather than “what”, it is a mechanism that must be determined.

 

The separation of mechanism and policy is important for flexibility. Policies are likely to change from place to place or time to time. In the worst case, each change in policy would require a change in the underlying mechanism.

 

So a general mechanism is more desirable. If mechanisms are properly separated and policy-independent, a change in policy requires redefining only certain parameters of the system; it does not require changing the underlying mechanism, which would be a much more time-consuming job.

 

For example, microkernel-based operating systems take the separation of mechanism and policy to an extreme by implementing a basic set of primitive building blocks. These blocks are almost policy-free, allowing more advanced mechanisms and policies to be added via user-created kernel modules or by user programs themselves.
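The idea can be sketched in a few lines of Python (an illustrative sketch, not from the text): the dispatcher below is a pure mechanism, and the scheduling policy is an interchangeable parameter, so changing policy never touches the mechanism.

```python
def dispatch(ready_queue, policy):
    """Mechanism: remove and return the process selected by `policy`."""
    chosen = policy(ready_queue)
    ready_queue.remove(chosen)
    return chosen

# Two interchangeable policies over (name, burst, priority) tuples.
fcfs_policy = lambda q: q[0]                        # first come, first served
sjf_policy = lambda q: min(q, key=lambda p: p[1])   # shortest burst first

queue = [("P1", 10, 3), ("P2", 1, 1), ("P3", 2, 3)]
print(dispatch(list(queue), fcfs_policy)[0])   # -> P1
print(dispatch(list(queue), sjf_policy)[0])    # -> P2
```

Swapping `fcfs_policy` for `sjf_policy` changes what is scheduled without touching how dispatching works.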

Chapter 4

Exercise 4.6 Describe the actions taken by a kernel to context switch

a. Among threads.

b. Among processes.

 

! a. Each kernel thread has a small data structure and a stack. Switching between kernel threads requires only saving the data structure and stack of the old thread and loading those of the new thread; it does not require changing memory-access information. A kernel may first save all the CPU registers (including the program counter, stack pointer, etc.) and the data structure of the old thread in the address space of the task containing it, decide which thread will run next, then load the CPU registers and data structure of the new thread, and continue executing the new thread.

 

User-level threads contain only a stack and a program counter; no kernel resources are required, and the kernel is not involved in scheduling them. Switching among them requires only changing the program counter and the stack pointer.

 

! b. Switching between processes requires saving the state of the old process and loading the saved state of the new process. A kernel may first save all the CPU registers in the address space of the old process, put the process control block of the old process into the ready queue, decide which process in the ready queue will execute next, then change the address space to that of the new process, load the CPU registers from its address space, and continue executing the new process.
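The process-switch sequence just described can be sketched as a toy simulation (the PCB fields, register names, and FIFO ready queue here are illustrative, not a real kernel interface):

```python
from collections import deque

def context_switch(cpu, old_pcb, ready_queue):
    # 1. Save all CPU registers into the old process's PCB
    old_pcb["registers"] = dict(cpu)
    # 2. Put the old process back into the ready queue
    ready_queue.append(old_pcb)
    # 3. Decide which process runs next (here: simple FIFO)
    new_pcb = ready_queue.popleft()
    # 4. Load the saved registers of the new process and resume it
    cpu.clear()
    cpu.update(new_pcb["registers"])
    return new_pcb

cpu = {"pc": 100, "sp": 500}                             # P1 currently running
p1 = {"name": "P1", "registers": {}}
p2 = {"name": "P2", "registers": {"pc": 200, "sp": 800}}
ready = deque([p2])

running = context_switch(cpu, p1, ready)
# cpu now holds P2's saved state; P1's state is preserved in its PCB
```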

Exercise 4.8 The correct producer-consumer algorithm presented in Section 4.4 allows only n-1 buffers to be full at any time. Modify the algorithm to allow all the buffers to be utilized fully.

 

! Method 1:

 

The producer and consumer processes share the following variables:

 

const n = ...;
type item = ...;
var buffer: array[0..n-1] of item;
    in, out: 0..n-1;
    isfull: boolean;

 

with in, out initialized to 0, and isfull initialized to false. The variable in points to the next free position in the buffer; out points to the first full position in the buffer.

 

The producer process has a local variable nextp, in which the new item to be produced is stored:

 

repeat
    ...
    produce an item in nextp
    ...
    while (in = out) and isfull do no-op;
    buffer[in] := nextp;
    in := (in + 1) mod n;
    if in = out then isfull := true;
until false;

Note that isfull is set only after the insertion actually fills the buffer; setting it before writing the item would let the consumer read a slot that has not been written yet. The parentheses in (in + 1) mod n also matter, since mod binds more tightly than + in Pascal.

 

The consumer process has a local variable nextc, in which the item to be consumed is stored:

 

repeat
    while (in = out) and (not isfull) do no-op;
    nextc := buffer[out];
    out := (out + 1) mod n;
    isfull := false;
    ...
    consume the item in nextc
    ...
until false;
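Transcribed into Python as a deterministic, single-threaded sketch (real producer and consumer processes would run concurrently and busy-wait where the functions below return early), Method 1 lets all n slots fill:

```python
n = 4
buffer = [None] * n
in_, out, isfull = 0, 0, False   # `in` is a Python keyword, hence in_

def produce(item):
    global in_, isfull
    if in_ == out and isfull:
        return False                  # buffer full: producer would spin here
    buffer[in_] = item
    in_ = (in_ + 1) % n
    if in_ == out:                    # insertion just filled the buffer
        isfull = True
    return True

def consume():
    global out, isfull
    if in_ == out and not isfull:
        return None                   # buffer empty: consumer would spin here
    item = buffer[out]
    out = (out + 1) % n
    isfull = False
    return item

for i in range(n):
    assert produce(i)                 # all n slots fill successfully
assert not produce(99)                # the (n+1)-th produce must wait
assert [consume() for _ in range(n)] == [0, 1, 2, 3]
```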

 

! Method 2:

 

The producer and consumer processes share the following variables:

 

const n = ...;
type item = ...;
var buffer: array[0..n-1] of item;
    in, out: 0..n-1;
    count: 0..n;

 

with in, out, and count initialized to 0. The variable in points to the next free position in the buffer; out points to the first full position; count holds the number of full buffer slots, and therefore must range from 0 up to n (not n-1).

 

 

The producer process has a local variable nextp, in which the new item to be produced is stored:

 

repeat
    ...
    produce an item in nextp
    ...
    while count = n do no-op;
    buffer[in] := nextp;
    in := (in + 1) mod n;
    count := count + 1;
until false;

The item is stored before count is incremented, so the consumer can never observe count > 0 while the slot is still unwritten.

 

The consumer process has a local variable nextc, in which the item to be consumed is stored:

 

repeat
    while count = 0 do no-op;
    nextc := buffer[out];
    out := (out + 1) mod n;
    count := count - 1;
    ...
    consume the item in nextc
    ...
until false;

This scheme assumes that count := count + 1 and count := count - 1 execute atomically; since both processes update count, a real implementation would have to protect these updates (e.g., with a lock or an atomic instruction).

 

 

 

 

 

Exercise 4.9 Consider the interprocess-communication scheme where mailboxes are used.

 

& a. Suppose a process P wants to wait for two messages, one from mailbox A and one from mailbox B. What sequence of send and receive should it execute?

 

 

! Either

 

receive(A, messageA);

receive(B, messageB);

or

receive(B, messageB);

receive(A, messageA);

 

will be okay.

 

& b. What sequence of send and receive should P execute if P wants to wait for one message either from mailbox A or from mailbox B (or from both)?

 

! Construct a mailbox set C containing mailboxes A and B; then execute

 

receive(C, message);
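One way to picture the mailbox set (an illustrative sketch; the set construct and its receive are hypothetical, as in the answer above) is a receive that busy-waits until any member mailbox holds a message:

```python
def receive_from_set(mailboxes):
    # Busy-wait until some member mailbox is nonempty, then receive from it.
    while True:
        for box in mailboxes:
            if box:                      # mailbox modeled as a list of messages
                return box.pop(0)

A = []
B = ["msg-from-B"]
assert receive_from_set([A, B]) == "msg-from-B"
```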

 

 

& c. A receive operation makes a process wait until the mailbox is nonempty. Either devise a scheme that allows a process to wait until a mailbox is empty, or explain why such a scheme cannot exist.

 

! Provide a system call port_status that returns the number of messages in a given mailbox. A process can then wait until a mailbox is empty by executing the following code:

 

while (port_status(my_mailbox)> 0) do no-op;
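A toy Python mailbox makes the scheme concrete (the Mailbox class and port_status are hypothetical names mirroring the answer above, not a real OS API):

```python
from collections import deque

class Mailbox:
    def __init__(self):
        self._messages = deque()
    def send(self, msg):
        self._messages.append(msg)
    def receive(self):
        return self._messages.popleft()
    def port_status(self):
        return len(self._messages)      # number of queued messages

box = Mailbox()
box.send("hello")
assert box.port_status() == 1
# A process waiting for the mailbox to empty would spin like this:
# while box.port_status() > 0: pass    # no-op loop
box.receive()
assert box.port_status() == 0          # mailbox now empty; the waiter proceeds
```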

 

 

 

 

Chapter 5

Exercise 5.3 Consider the following set of processes, with the length of the CPU-burst time given in milliseconds:

 

Process    Burst Time    Priority
P1         10            3
P2         1             1
P3         2             3
P4         1             4
P5         5             2

 

The processes are assumed to have arrived in the order P1, P2, P3, P4, P5, all at time 0.

 

& a. Draw four Gantt charts illustrating the execution of these processes using FCFS, SJF, a nonpreemptive priority ( a smaller priority number implies a higher priority ), and RR (quantum = 1) scheduling.

 

!

For FCFS (First-Come, First-Served) scheduling:

| P1 | P2 | P3 | P4 | P5 |
0   10   11   13   14   19

For SJF (Shortest-Job-First) scheduling:

| P2 | P4 | P3 | P5 | P1 |
0    1    2    4    9   19

For nonpreemptive priority scheduling:

| P2 | P5 | P1 | P3 | P4 |
0    1    6   16   18   19

For RR (Round-Robin, quantum = 1) scheduling:

| P1 | P2 | P3 | P4 | P5 | P1 | P3 | P5 | P1 | P5 | P1 | P5 | P1 | P5 | P1 | P1 | P1 | P1 | P1 |
0    1    2    3    4    5    6    7    8    9   10   11   12   13   14   15   16   17   18   19

 

 

& b. What is the turnaround time of each process for each of the scheduling algorithms in part a?

 

!

For FCFS scheduling:

Process    Turnaround Time
P1         10
P2         11
P3         13
P4         14
P5         19

For SJF scheduling:

Process    Turnaround Time
P1         19
P2         1
P3         4
P4         2
P5         9

For nonpreemptive priority scheduling:

Process    Turnaround Time
P1         16
P2         1
P3         18
P4         19
P5         6

For RR scheduling:

Process    Turnaround Time
P1         19
P2         2
P3         7
P4         4
P5         14

 

& c. What is the waiting time of each process for each of the scheduling algorithms in part a?

 

!

For FCFS scheduling:

Process    Waiting Time
P1         0
P2         10
P3         11
P4         13
P5         14

For SJF scheduling:

Process    Waiting Time
P1         9
P2         0
P3         2
P4         1
P5         4

For nonpreemptive priority scheduling:

Process    Waiting Time
P1         6
P2         0
P3         16
P4         18
P5         1

For RR scheduling:

Process    Waiting Time
P1         9
P2         1
P3         5
P4         3
P5         9

 

 

& d. Which of the schedules in part a results in the minimal average waiting time (over all processes)?

 

!

Scheduling Policy         Average Waiting Time
FCFS                      (0+10+11+13+14)/5 = 9.6
SJF                       (9+0+2+1+4)/5 = 3.2
Nonpreemptive priority    (6+0+16+18+1)/5 = 8.2
RR                        (9+1+5+3+9)/5 = 5.4

 

The SJF scheduling results in the minimal average waiting time.
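The averages above can be checked with a short script (all processes arrive at time 0; the non-preemptive schedules are computed directly from their run order, and RR is simulated step by step with quantum 1):

```python
from collections import deque

procs = {"P1": (10, 3), "P2": (1, 1), "P3": (2, 3),
         "P4": (1, 4), "P5": (5, 2)}        # name: (burst, priority)

def waiting_from_order(order):
    """Non-preemptive schedule: a process waits for all bursts run before it."""
    t, wait = 0, {}
    for name in order:
        wait[name] = t
        t += procs[name][0]
    return wait

def rr_waiting(quantum=1):
    """Round-robin simulation: waiting time = completion time - burst."""
    left = {p: procs[p][0] for p in procs}
    queue, t, finish = deque(procs), 0, {}
    while queue:
        p = queue.popleft()
        run = min(quantum, left[p])
        t += run
        left[p] -= run
        if left[p] == 0:
            finish[p] = t
        else:
            queue.append(p)                 # unfinished: back of the queue
    return {p: finish[p] - procs[p][0] for p in procs}

avg = lambda w: sum(w.values()) / len(w)
fcfs = waiting_from_order(["P1", "P2", "P3", "P4", "P5"])
sjf = waiting_from_order(sorted(procs, key=lambda p: procs[p][0]))
prio = waiting_from_order(sorted(procs, key=lambda p: procs[p][1]))
print(avg(fcfs), avg(sjf), avg(prio), avg(rr_waiting()))  # -> 9.6 3.2 8.2 5.4
```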

Exercise 5.10 Explain the differences in the degree to which the following scheduling algorithms discriminate in favor of short processes:

a. FCFS

b. RR

c. Multilevel feedback queues

 

!

 

Scheduling Algorithm          Degree of Discrimination in Favor of Short Processes
FCFS                          Low
RR                            Medium
Multilevel feedback queues    High

 

For FCFS scheduling, the process that arrives first executes first, whether it is a long process or a short one. Short processes may therefore have to wait for a long process to finish before they can execute; this is the so-called convoy effect. FCFS scheduling thus favors short processes least.

 

For RR scheduling, the process that arrives first still executes first, but after a fixed time slice it is preempted and the next process starts to execute. Short processes therefore need not wait for a whole long process to finish, so RR scheduling favors short processes more than FCFS does.

 

For multilevel feedback queues, we may assign a different scheduling algorithm to each queue. Typically, the queue with the smallest time slice is given the highest priority, the queue with a somewhat larger time slice is given lower priority, and so on. Short processes tend to finish within the highest-priority queue. When a long process does not finish within the highest-priority queue (which also has the smallest time slice), it is moved to the second queue, with lower priority and a larger time slice. Because long processes tend to sink into lower-priority queues while short processes stay in the higher-priority ones, this kind of multilevel feedback queue favors short processes most.
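The argument can be demonstrated with a toy two-level feedback queue (the quanta and the demote-on-expiry rule are the common textbook setup assumed here, not taken from a specific system):

```python
from collections import deque

def mlfq(bursts, quanta=(1, 4)):
    """Queue 0: quantum 1, highest priority; queue 1: quantum 4."""
    queues = [deque(bursts.items()), deque()]
    t, finish = 0, {}
    while any(queues):
        level = 0 if queues[0] else 1       # always serve queue 0 first
        name, left = queues[level].popleft()
        run = min(quanta[level], left)
        t += run
        if left - run == 0:
            finish[name] = t
        else:                               # quantum expired: demote
            queues[min(level + 1, 1)].append((name, left - run))
    return finish

finish = mlfq({"long": 10, "short": 1})
assert finish["short"] < finish["long"]     # the short process is favored
```

The short process completes inside the highest-priority queue, while the long process is demoted and finishes last, matching the discussion above.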