Operating System Homework #3

Student: Murphy Chen

Class: Electrical Engineering, Senior (電機四)

Instructor: 陳少傑

 

Examination of Operating Systems

 

1. Which manager is in charge of each of the following activities?

Choose the one best-fit answer from: process management, processor management, primary storage management, secondary storage management, or file management.

 

(a) Secondary storage management supports swap-space management.

(b) Process management supports remote procedure call.

(c) Processor management decides which process is to be dispatched.

(d) Primary storage management supports page replacement.

(e) File management supports space allocation methods.

 

2. Explain the differences between A and B:

 

(a) Mutual exclusion and Synchronization.

 

Mutual exclusion is the ability to exclude all other processes from a course of action while one process is engaged in it.

 

Synchronization is the ability to prevent a process from continuing until some condition is met. Synchronization is a generalization of mutual exclusion.

 

(b) Job scheduling and CPU scheduling.

 

All the jobs that enter the system are kept in the job pool. This pool consists of all processes residing on mass storage awaiting allocation of main memory. If several jobs are ready to be brought into memory, and there is not enough room for all of them, then the system must choose among them. This decision is job scheduling.

 

When the operating system selects a job from the job pool, it loads it into memory for execution. When several jobs in memory are ready to run at the same time, the system must choose among them. This decision is CPU scheduling.

 

(c) Deadlock prevention and Deadlock avoidance.

 

Deadlock prevention is a set of methods for ensuring that at least one of the four necessary conditions for a deadlock to occur cannot hold. These methods prevent deadlocks by constraining how requests for resources can be made.

Deadlock avoidance requires that the operating system be given in advance additional information concerning which resources a process will request and use during its lifetime. With this additional knowledge, we can decide for each request whether or not the process should wait.

 

3. The following procedures define two concurrent processes, p1 and p2. The order in which the code is executed matters. For example, p2's read(x) should be executed after p1's write(x).

Proc-p1()
{
    while (TRUE)
    {
        1.1  <compute A1>;
        1.2  write(x);
        1.3  <compute A2>;
        1.4  read(y);
    }
}

Proc-p2()
{
    while (TRUE)
    {
        2.1  read(x);
        2.2  <compute B1>;
        2.3  write(y);
        2.4  <compute B2>;
    }
}

 

(a) Try to partition the processes into subprocesses: p1.1, p1.2, ..., p2.1, ..., p2.4, and use a precedence graph to represent the order of these subprocesses.

 

Subprocess    Instruction
p1.1          <compute A1>;
p1.2          write(x);
p1.3          <compute A2>;
p1.4          read(y);
p2.1          read(x);
p2.2          <compute B1>;
p2.3          write(y);
p2.4          <compute B2>;

 

The precedence graph is shown as follows:

 

[Figure: precedence graph. Within each process, p1.1 → p1.2 → p1.3 → p1.4 and p2.1 → p2.2 → p2.3 → p2.4; across the processes, p1.2 → p2.1 (x must be written before it is read) and p2.3 → p1.4 (y must be written before it is read).]

(b) Try to use P, V, and necessary semaphores to implement the above computation.

 

 

var filex, filey: semaphore; // both initialized to 0; wait is the P operation and signal is the V operation

 

 

proc-p1()
{
    while (TRUE)
    {
        <compute A1>;
        write(x);
        signal(filex);
        <compute A2>;
        wait(filey);
        read(y);
    }
}

 

 

 

proc-p2()
{
    while (TRUE)
    {
        wait(filex);
        read(x);
        <compute B1>;
        write(y);
        signal(filey);
        <compute B2>;
    }
}
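As a sanity check, the wait/signal solution above can be sketched with Python's threading.Semaphore (wait corresponds to acquire, signal to release). This is an illustrative sketch only: it runs a bounded number of iterations instead of while(TRUE), and the trace list is added just to observe the ordering.

```python
import threading

filex = threading.Semaphore(0)  # signalled after p1 writes x
filey = threading.Semaphore(0)  # signalled after p2 writes y
trace = []                      # records the order of reads and writes
N = 3                           # bounded iterations instead of while(TRUE)

def p1():
    for _ in range(N):
        # <compute A1>
        trace.append("write(x)")
        filex.release()         # signal(filex)
        # <compute A2>
        filey.acquire()         # wait(filey)
        trace.append("read(y)")

def p2():
    for _ in range(N):
        filex.acquire()         # wait(filex)
        trace.append("read(x)")
        # <compute B1>
        trace.append("write(y)")
        filey.release()         # signal(filey)
        # <compute B2>

t1, t2 = threading.Thread(target=p1), threading.Thread(target=p2)
t1.start(); t2.start(); t1.join(); t2.join()
```

Because p1 blocks on filey until p2 has read x and written y, and p2 blocks on filex until p1 has written x, the trace is always write(x), read(x), write(y), read(y), repeated.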

 

 

 

 

4.(a) Rewrite the following program using parbegin / parend statements. Make sure that it exploits maximum parallelism but always produces the same correct result as the sequential execution. (Hint: you should draw the precedence graph first.)

 

1) y := input();
2) a := y * y;
3) r := y * 2;
4) s := y - 1;
5) b := r * s;
6) c := a - b;
7) w := c + 1;

 

The precedence graph is as follows:

 

[Figure: precedence graph. Statement 1 precedes 2, 3, and 4; statements 3 and 4 precede 5; statements 2 and 5 precede 6; statement 6 precedes 7.]

 

The resulting program is as follows:

 

(Refer to the usage of cobegin and coend in The Logical Design of Operating Systems by Lubomir Bic and Alan C. Shaw.)

 

y := input();
parbegin
    parbegin r := y * 2 | s := y - 1 parend;
    b := r * s;
|
    a := y * y;
parend;
c := a - b;
w := c + 1;

 

 

(b) Try to use fork, join, and quit to implement the above computation. (Hint: the meanings of fork, join, and quit are as follows: a process p executing fork x causes a new process q to start at the label x; join t, y has the effect t := t - 1; if t = 0 then goto y; and if a process p executes the instruction quit, p is terminated.)

 

The resulting program is as follows:

 

    y := input();
    n := 2;
    fork p2;
    m := 2;
    fork p4;
    r := y * 2; join m, p5; quit;
p4: s := y - 1; join m, p5; quit;
p5: b := r * s; join n, p6; quit;
p2: a := y * y; join n, p6; quit;
p6: c := a - b;
    w := c + 1;

 

5. Five jobs A through E arrive at a computer center with estimated run times of 10, 6, 2, 4, and 8 minutes. Their priorities are 3, 5, 2, 1, and 4, respectively (with 1 being the highest priority). For each of the following scheduling algorithms, plot the Gantt chart and determine the mean process turnaround time.

 

(a) Priority scheduling.

 

The Gantt chart is plotted as follows:

 

| D | C | A | E | B |
0   4   6   16  24  30

The mean process turnaround time = ( 4 + 6 + 16 + 24 + 30 ) / 5 = 16 minutes/process

 

(b) Round robin.

 

Assume that the time quantum is 1 minute.
The Gantt chart is plotted as follows:

 

| A | B | C | D | E | A | B | C | D | E | A | B | D | E | A | B | D | E | A | B | E | A | B | E | A | E | A | E | A | A |
0   1   2   3   4   5   6   7   8   9   10  11  12  13  14  15  16  17  18  19  20  21  22  23  24  25  26  27  28  29  30

The mean process turnaround time = ( 30 + 23 + 8 + 17 + 28 ) / 5 = 21.2 minutes/process

 

(c) SJF.

 

The Gantt chart is plotted as follows:

| C | D | B | E | A |
0   2   6   12  20  30

The mean process turnaround time = ( 2 + 6 + 12 + 20 + 30 ) / 5 = 14 minutes/process
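These three results can be reproduced with a small simulation (a sketch; it assumes all jobs arrive at time 0 and that round-robin ties are broken in alphabetical order, as in the charts above).

```python
from collections import deque

burst = {"A": 10, "B": 6, "C": 2, "D": 4, "E": 8}
prio  = {"A": 3, "B": 5, "C": 2, "D": 1, "E": 4}   # 1 = highest priority

def run_in_order(order):
    """Non-preemptive run; returns each job's completion (= turnaround) time."""
    t, finish = 0, {}
    for j in order:
        t += burst[j]
        finish[j] = t
    return finish

def round_robin(quantum=1):
    q, left, t, finish = deque(sorted(burst)), dict(burst), 0, {}
    while q:
        j = q.popleft()
        t += min(quantum, left[j])
        left[j] -= quantum
        if left[j] > 0:
            q.append(j)        # not done yet: go to the back of the queue
        else:
            finish[j] = t
    return finish

def mean_turnaround(finish):
    return sum(finish.values()) / len(finish)

priority_order = sorted(burst, key=lambda j: prio[j])    # D, C, A, E, B
sjf_order      = sorted(burst, key=lambda j: burst[j])   # C, D, B, E, A
```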

 

6. Assume a program with eight virtual pages, numbered 0 to 7. The pages are referenced in the order 0123012301234567. Show how many page faults will occur (and point out where they occur) if you use three page frames and the following strategies:

 

(a) Least Recently Used(LRU).

 

Reference:  0  1  2  3  0  1  2  3  0  1  2  3  4  5  6  7
Fault:      F  F  F  F  F  F  F  F  F  F  F  F  F  F  F  F

There are 16 page faults.

 

(b) Least Frequently Used(LFU).

 

Reference:  0  1  2  3  0  1  2  3  0  1  2  3  4  5  6  7
Fault:      F  F  F  F  F  F  F  F  F  F  F  F  F  F  F  F

There are 16 page faults.

 

(c) First-In First-Out(FIFO).

 

Reference:  0  1  2  3  0  1  2  3  0  1  2  3  4  5  6  7
Fault:      F  F  F  F  F  F  F  F  F  F  F  F  F  F  F  F

There are 16 page faults.
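The LRU and FIFO counts can be verified with a small simulator (a sketch; LFU gives the same count on this string because the newest page always has the lowest reference count and every reference therefore faults anyway).

```python
def lru_faults(refs, nframes):
    frames, faults = [], 0          # frames ordered least- to most-recently used
    for p in refs:
        if p in frames:
            frames.remove(p)        # hit: refresh its recency
        else:
            faults += 1
            if len(frames) == nframes:
                frames.pop(0)       # evict the least recently used page
        frames.append(p)
    return faults

def fifo_faults(refs, nframes):
    frames, faults = [], 0          # frames ordered oldest to newest
    for p in refs:
        if p not in frames:
            faults += 1
            if len(frames) == nframes:
                frames.pop(0)       # evict the page resident longest
            frames.append(p)
    return faults

refs = [int(c) for c in "0123012301234567"]
```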

 

7. Given three processes A, B, and C; three resources X, Y, and Z; and the following events: (1) A requests X, (2) A requests Y, (3) B requests Y, (4) B requests Z, (5) C requests Z, (6) C requests X. Assume that a requested resource is always allocated to the requesting process if it is available.

 

Draw the resource-allocation graph for each of the following sequences, state whether there is a deadlock, and if so, explain how to recover.

 

(a) Events occur in the sequence of 1, 3, 5, 2, 6, and 4.

 

The resource allocation graph is shown as follows:

[Figure: resource-allocation graph. Allocation edges X → A, Y → B, Z → C; request edges A → Y, B → Z, C → X, forming the cycle A → Y → B → Z → C → X → A.]

Because each resource type has only one instance and there is a cycle in the resource-allocation graph, we can see that a deadlock has occurred.

 

There are two methods to recover from a deadlock: process termination and resource preemption. Using the former, for example, we choose one process to terminate. If we terminate process B, resource Y can now be allocated to process A, so process A can finish its job and release both resources X and Y; resource X can then be allocated to process C, and process C can finish its job as well.

 

(b) Events occur in the sequence of 1, 5, 2, 6, 3, and 4.

 

The resource allocation graph is shown as follows:

 

[Figure: resource-allocation graph. Allocation edges X → A, Y → A, Z → C; request edges B → Y, B → Z, C → X. There is no cycle.]

Because there are no cycles in the resource allocation graph, we can see that no process in the system is deadlocked.

 

 

 

 

Chapter 8

 

Exercise 8.5

Q: Given memory partitions of 100K, 500K, 200K, 300K, and 600K (in order), how would each of the First-fit, Best-fit, and Worst-fit algorithms place processes of 212K, 417K, 112K, and 426K (in order)? Which algorithm makes the most efficient use of memory?

 

A: For the First-fit algorithm:

 

212K → the 500K partition (leaving a 288K hole); 417K → the 600K partition (leaving 183K); 112K → the 288K hole left in the 500K partition (leaving 176K).

 

The request for 426K must wait, because every remaining hole is smaller than 426K.

 

 

For the Best-fit algorithm:

 

212K → the 300K partition (leaving 88K); 417K → the 500K partition (leaving 83K); 112K → the 200K partition (leaving 88K); 426K → the 600K partition (leaving 174K).

For the Worst-fit algorithm:

 

212K → the 600K partition (leaving a 388K hole); 417K → the 500K partition (leaving 83K); 112K → the 388K hole left in the 600K partition (leaving 276K).

 

The request for 426K must wait, because every remaining hole is smaller than 426K.

 

Finally, we can see that only the Best-fit algorithm fulfills all four allocation requests, so it makes the most efficient use of memory here.
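The three placements can be checked with a small allocator sketch (illustrative only; each request is carved out of the chosen hole, leaving the remainder behind as a smaller hole, and a request that fits nowhere is recorded as None).

```python
def allocate(holes, requests, choose):
    holes, placed = list(holes), {}
    for req in requests:
        fits = [i for i, h in enumerate(holes) if h >= req]
        if not fits:
            placed[req] = None           # no hole is big enough: request waits
        else:
            i = choose(fits, holes)
            placed[req] = holes[i]       # record the hole's size before carving
            holes[i] -= req              # the remainder stays as a smaller hole
    return placed

first_fit = lambda fits, holes: fits[0]                            # lowest-addressed hole
best_fit  = lambda fits, holes: min(fits, key=lambda i: holes[i])  # tightest fitting hole
worst_fit = lambda fits, holes: max(fits, key=lambda i: holes[i])  # largest hole

holes = [100, 500, 200, 300, 600]
reqs  = [212, 417, 112, 426]
```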

 

Exercise 8.16

Q: Consider the following segment table:

Segment    Base    Length
   0        219      600
   1       2300       14
   2         90      100
   3       1327      580
   4       1952       96

What are the physical addresses for the following logical addresses?
a. 0,430 / b. 1,10 / c. 2,500 / d. 3,400 / e. 4,112

 

A:
a. The physical address is 219 + 430 = 649.
b. The physical address is 2300 + 10 = 2310.
c. The offset in the logical address is larger than the length of the corresponding segment (500 > 100), so this memory access is illegal; an addressing error occurs.
d. The physical address is 1327 + 400 = 1727.
e. The offset in the logical address is larger than the length of the corresponding segment (112 > 96), so this memory access is illegal; an addressing error occurs.
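The five lookups can be sketched as follows (an illustrative helper; returning None stands in for the addressing trap).

```python
# Segment table from the exercise: segment number -> (base, length)
SEGMENTS = {0: (219, 600), 1: (2300, 14), 2: (90, 100), 3: (1327, 580), 4: (1952, 96)}

def translate(segment, offset):
    base, length = SEGMENTS[segment]
    if offset >= length:
        return None          # offset beyond the segment limit: illegal access
    return base + offset
```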

Exercise 8.17

Q: Consider the Intel address translation scheme shown in Figure 8.28.

a. Describe all the steps that are taken by the Intel 80386 in translating a logical address into a physical address.

b. What are the advantages to the operating system of hardware that provides such complicated memory translation hardware?

c. Are there any disadvantages to this address translation system?

 

A:

a.

The logical address is a pair (selector, offset), where the selector is a 16-bit number:

 

|  s (13 bits)  |  g (1 bit)  |  p (2 bits)  |

 

in which s designates the segment number, g indicates whether the segment is in the GDT or LDT, and p deals with protection. The offset is a 32-bit number specifying the location of the byte (word) within the segment in question.

 

The physical address on the 386 is 32 bits long and is formed as follows. The selector points to the appropriate entry in the LDT or GDT. The base and limit information about the segment in question is used to generate a linear address. First, the limit is used to check for address validity. If the address is not valid, a memory fault is generated, resulting in a trap to the operating system. If it is valid, then the value of the offset is added to the value of the base, resulting in a 32-bit linear address. This address is then translated into a physical address.

 

The linear address is divided into a page number consisting of 20 bits, and a page offset consisting of 12 bits. Since we page the page table, the page number is further divided into a 10-bit page directory pointer and a 10-bit page table pointer. The logical address is as follows:

 

|         page number         | page offset |
| p1 (10 bits) | p2 (10 bits) | d (12 bits) |

 

p1 points to the appropriate entry in the page directory; together with the page directory base register, it locates the base address of the page table. p2 points to the appropriate entry in that page table; together with the page table's base address, it locates the corresponding page frame in physical memory. Adding the page offset to the frame address finally yields the physical address in question.
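The 10/10/12 split of the linear address can be sketched with bit operations (an illustrative helper; the sample address in the test is arbitrary).

```python
def split_linear(addr):
    """Split a 32-bit linear address into (p1, p2, d)."""
    d  = addr & 0xFFF            # low 12 bits: page offset
    p2 = (addr >> 12) & 0x3FF    # next 10 bits: page-table index
    p1 = (addr >> 22) & 0x3FF    # top 10 bits: page-directory index
    return p1, p2, d
```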

 

b.

First, memory is segmented, which has the following advantages:

 

1. It lends itself to protection. Because a segment represents a semantically defined portion of the program, it is likely that all entries in the segment will be used the same way. This simplifies the task of protection.

 

2. It lends itself to sharing. Each process has a segment table associated with its process control block, which the dispatcher uses to define the hardware segment table when this process is given the CPU. Segments are shared when entries in the segment tables of two different processes point to the same physical locations.

 

Then, each segment is paged, with the following advantages:

 

1. A segment may be broken up into a number of pieces and these pieces need not be contiguously located in main memory during execution.

 

2. With two-level paging, the size of the page table is smaller.

 

3. There is no external fragmentation.

 

c.

Disadvantages to this address translation system:

 

1. Each access to memory requires four references to memory (three to locate the appropriate entries in the descriptor table, the page directory, and the page table, respectively, and one to access the content at the resulting physical address). So the time required to map a logical address to a physical address increases. However, a set of associative registers (a TLB) can reduce the performance degradation to an acceptable level.

 

2. With paging, there is the problem of internal fragmentation.

 

 

 

 

Chapter 9

 

Exercise 9.9

Q: Consider a demand-paging system with the following time-measured utilizations:

 

CPU utilization      20%
Paging disk          97.7%
Other I/O devices    5%

 

Which (if any) of the following will (probably) improve CPU utilization? Explain your answer.

 

a. Install a faster CPU.

b. Install a bigger paging disk.

c. Increase the degree of multiprogramming.

d. Decrease the degree of multiprogramming.

e. Install more main memory.

f. Install a faster hard disk, or multiple controllers with multiple hard disks.

g. Add prepaging to the page fetch algorithms.

h. Increase the page size.

 

A:

a. Because CPU utilization is low (20%) and paging-disk utilization is very high (97.7%), the system is short of free memory (too many processes are in the system at the same time), so merely installing a faster CPU will not improve CPU utilization much.

 

b. Because the system is short of free memory, merely installing a bigger paging disk will not be of much help, either.

 

c. Increasing the degree of multiprogramming will further degrade performance: even more pages will have to be swapped in and out of memory, and the system will spend more time paging than computing.

 

d. Decreasing the degree of multiprogramming will help, because it decreases the frequency of paging, leaving the system more time for computation.

 

e. Installing more main memory will help: with more free memory, the frequency of paging decreases, leaving the system more time for computation.

 

f. Installing a faster hard disk, or multiple controllers with multiple hard disks, may help: the system then spends less time paging and has more time for computation.

 

g. Adding prepaging to the page-fetch algorithms may or may not help; it depends on whether the cost of prepaging is less than the cost of servicing the corresponding page faults.

 

h. Increasing the page size will degrade performance: internal fragmentation becomes worse, main-memory utilization drops, fewer processes fit into main memory, and the system has to spend more time paging.

 

 

 

Exercise 9.11

Q: Consider the following page reference string:

 

1,2,3,4,2,1,5,6,2,1,2,3,7,6,3,2,1,2,3,6.

 

How many page faults would occur for the following replacement algorithms, assuming one, two, three, four, five, six, or seven frames? Remember all frames are initially empty, so your first unique pages will all cost one fault each.

 

- LRU replacement
- FIFO replacement
- Optimal replacement

 

A:

           1 frame  2 frames  3 frames  4 frames  5 frames  6 frames  7 frames
LRU           20       18        15        10         8         7         7
FIFO          20       18        16        14        10        10         7
Optimal       20       15        11         8         7         7         7
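The Optimal row can be reproduced with Belady's algorithm, which evicts the page whose next use lies farthest in the future (a sketch; the LRU and FIFO rows can be checked the same way with the corresponding eviction rules).

```python
def optimal_faults(refs, nframes):
    frames, faults = set(), 0
    for i, p in enumerate(refs):
        if p in frames:
            continue                     # hit
        faults += 1
        if len(frames) == nframes:
            def next_use(q):             # index of q's next reference, or infinity
                try:
                    return refs.index(q, i + 1)
                except ValueError:
                    return float("inf")
            frames.remove(max(frames, key=next_use))
        frames.add(p)
    return faults

refs = [1, 2, 3, 4, 2, 1, 5, 6, 2, 1, 2, 3, 7, 6, 3, 2, 1, 2, 3, 6]
```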

 

 

Exercise 9.20

Q: What is the cause of thrashing? How does the system detect thrashing? Once it detects thrashing, what can the system do to eliminate this problem?

 

A:

1. The cause of thrashing:

 

In the steady state, practically all of main memory is occupied with process pieces, so that the processor and operating system have direct access to as many processes as possible. Thus, when the operating system brings one piece in, it must throw another out. If it throws out a piece just before that piece is about to be used, it will have to fetch the piece again almost immediately. Too much of this leads to a condition known as thrashing: the processor spends most of its time swapping pieces rather than executing user instructions.

 

2. How to detect thrashing:

 

We can measure CPU utilization and the degree of multiprogramming at regular intervals and check whether increasing the degree of multiprogramming also increases CPU utilization. If it does, there is no thrashing; otherwise, thrashing is occurring.

 

3. Having detected thrashing, what can the system do to eliminate this problem:

 

The system can swap some processes out of main memory, moving them from the ready queue back to the job queue for a while, leaving more room for the remaining processes to run. The remaining processes then no longer need to page in and out repeatedly and frequently, which eliminates the thrashing.