

CS350 Undergraduate Operating Systems Spring 2021

Assigned: March 24

Due: April 4, 11:59PM EDT

1. Protection and Interrupt-Handling:
(a) Consider a dual-mode system in which the kernel is separated from user-level applications using two "rings of protection".

  • (4 points) Is it safe for the kernel to directly access user-level memory in any process address space? Briefly explain your answer.
  • (6 points) Is it safe for the kernel to directly execute user-level code in any process address space? Again, briefly explain your answer.
  • (8 points) Systems such as UNIX allow processes to associate signal handlers with various events. Unlike interrupts, signals are not handled until the target process is active. Briefly explain one CPU, memory and I/O protection problem if we allowed a signal handler for process p1 to execute in the context of another process, p2. If we could address these protection problems, what would be the advantage of this scheme?

(b) (8 points) Consider the implementation of a micro-kernel on an architecture such as the Intel x86. Suppose this OS structure allows device drivers to be implemented at user-level. On the x86 architecture a general protection fault occurs if we attempt to trap to a ring of protection less privileged than the current protection domain. Briefly explain how we can invoke user-level device drivers from the kernel (by essentially performing the opposite of a system call). Show pseudo-code or a diagram, if necessary, to help with your explanation.
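To make the "reverse system call" idea concrete, here is one possible pseudo-code sketch of such an upcall path. All names here (driver_table, set_user_mode_return, UPCALL_DONE, etc.) are illustrative inventions, not an actual x86 or micro-kernel API:

```
// In-kernel dispatcher, run when a device interrupt arrives:
driver_upcall(irq):
    ctx = save_current_context()          // so the kernel can resume later
    d   = driver_table[irq]               // user-level driver registered for irq
    set_user_stack(d.stack)               // switch to the driver's user stack
    set_user_mode_return(d.entry_point)   // build a trap frame whose saved PC
                                          // and ring bits point into ring 3
    iret                                  // "return" downward into user mode

// User-level driver, after servicing the device:
driver_done():
    syscall(UPCALL_DONE)                  // trap back up into the kernel,
                                          // which restores ctx and resumes
```

The key idea is that while an x86 trap cannot target a less-privileged ring, an interrupt-return style instruction can always lower privilege by consuming a fabricated trap frame; the driver then re-enters the kernel through the normal system-call trap.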

2. Virtualization: For this question you will need to study extra sources of information, such as technical papers and web pages. Please cite any sources of information with your answers.

(a) (10 points) Before the introduction of hardware virtualization support, x86 architecture was not considered virtualizable in the “trap and emulate” sense. This was because of approximately 17 sensitive, unprivileged instructions. Briefly describe what is meant by a “sensitive instruction” in this case and give an example of how it is potentially problematic when constructing a virtual machine (VM). What is meant by the term “trap and emulate”? How then can the x86 support VMs when not all instructions are virtualizable?

(b) (10 points) Consider that a guest OS is mapped to a process address space Pguest. Both applications and a guest kernel exist in Pguest. Each guest application is associated with a "virtual process" that behaves like a separate thread in Pguest. Likewise, the guest kernel is similar to a library but system calls from a guest application to this kernel cannot behave like direct function calls. They still need to operate like traditional system calls in a dual-mode system (e.g., using a mechanism similar to int 0x80 or sysenter in Linux on the x86).

Explain how one could virtualize system calls issued by a guest application within Pguest, so they are serviced by kernel code of the guest OS. How do you differentiate and control the switching between application and kernel stacks in Pguest? In your answer, include a diagram that shows how control is redirected between various parts of memory, to handle virtualized system calls. HINT: Early versions of User-Mode Linux used a “tracing thread” technique similar to this. You can assume this is a form of OS rather than machine virtualization, and there is no special hardware support for virtualization.
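One possible shape of the tracing-thread technique, sketched as pseudo-code. The ptrace request names are real Linux primitives, but the surrounding structure (trace_loop, the register/stack manipulation helpers) is an illustrative assumption, not the actual User-Mode Linux source:

```
// Tracing thread (the controller for Pguest's virtual processes):
trace_loop():
    for each virtual process vp in Pguest:
        ptrace(PTRACE_SYSCALL, vp)        // run vp until its next syscall trap
    loop:
        vp = wait_for_stopped_child()
        // vp has trapped into the HOST kernel; nullify the syscall there,
        // e.g., by rewriting its syscall number to a harmless one (getpid):
        ptrace(PTRACE_POKEUSER, vp, SYSCALL_NUM_OFFSET, NR_getpid)
        save vp's user registers (application stack pointer, PC)
        set vp's stack pointer to the guest KERNEL stack
        set vp's PC to the guest kernel's syscall entry point
        ptrace(PTRACE_SYSCALL, vp)        // resume: vp now executes guest-kernel
                                          // code, on the kernel stack, in Pguest
    // When the guest kernel finishes, the saved application registers and
    // stack pointer are restored and vp resumes in application code.
```

Control is thus redirected: application code in Pguest → host kernel trap → tracing thread → guest kernel code in Pguest → back to application code. The stack switch is controlled explicitly by the tracing thread rewriting the stopped child's stack-pointer register.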

3. Process/Thread Scheduling: Consider the following set of processes, with all times in milliseconds.

Process  Arrival Time  Burst/Computation Time  Priority (1=highest)  Deadline
P1       3             2                       1                     7
P2       0             2                       3                     5
P3       2             3                       4                     15
P4       6             4                       2                     11
P5       1             5                       5                     20

(a) (20 points) In the space below, draw the Gantt charts illustrating the execution of these processes using the following scheduling algorithms: (1) Static Priority with preemption, (2) Shortest Job First, no preemption, (3) Round-Robin, no priority, quantum = 1, and (4) Earliest Deadline First, with preemption.

(b) (8 points) What are the average waiting and average turnaround times for each of the processes, using the scheduling algorithms above?


(c) (5 points) Consider a scheduling algorithm with O(n) time complexity to schedule n processes (i.e., the scheduling latency is linearly proportional to the number of processes ready for execution). Will the average scheduling latency be more or less for CPU-bound processes than I/O-bound processes? Briefly explain your answer.

(d) Rate Monotonic Scheduling (RMS) is a well-known, preemptive scheduling algorithm for real-time, periodic processes.

• (6 points) If the following three periodic processes all require scheduling for the first time, at time t, can a feasible schedule be constructed with RMS? Briefly explain your answer by showing the timeline sequence or proving schedulability using the completion time test. What happens if we use earliest deadline first scheduling, assuming deadlines are at the ends of request periods? Again, prove your answer either mathematically or by example.

Process  Computation Time, C  Request Period, T
P1       1                    4
P2       6                    20
P3       4                    10


• (3 points) Can RMS yield a feasible schedule for the following periodic processes? Briefly explain your answer.

Process  Computation Time, C  Request Period, T
P1       2                    4
P2       1                    8
P3       6                    16

• (6 points) On a single CPU, what is the least upper bound on utilization by a set of processes that yields a feasible schedule with RMS, if all processes have harmonically-related request periods? Prove your answer. NOTE: harmonically-related request periods implies each period is a multiple of all smaller periods.


(e) (10 points) Consider a proportional share scheduler (e.g., EEVDF) for a set of n tasks competing for a single CPU. If each task has a time quantum, q, what is the worst-case lag between ideal and actual usage of the CPU experienced by any one task? Prove your answer. NOTE: You can assume all tasks have equal weights (and, hence, priorities).


4. Threads (15 points) For this question you will have to study pthreads. The following piece of POSIX code implements a consumer, in a producer-consumer problem, using a shared circular buffer of N slots. Briefly explain what is wrong with the consumer and how it should be fixed:

   typedef struct {
     char *data;    // Slot data.
     size_t size;   // Size of data.
     pthread_t id;  // ID of destination thread.
   } slot_t;

   typedef struct {
     slot_t slot[N];
     int count;                     // Number of full slots in buffer.
     int in_marker;                 // Next slot to add data to.
     int out_marker;                // Next slot to take data from.
     pthread_mutex_t mutex;         // Mutex for shared buffer.
     pthread_cond_t occupied_slot;  // Condition when >= 1 full buffer slot.
     pthread_cond_t empty_slot;     // Condition when >= 1 empty buffer slot.
   } buffer_t;

   char *consume (buffer_t *b) {
     char *item;
     pthread_t self = pthread_self();
     pthread_mutex_lock (&b->mutex); // Don't worry about checking return values.
     if (b->count <= 0)
       pthread_cond_wait (&b->occupied_slot, &b->mutex);
     assert (b->count > 0);
     // Check id of target thread to see message is for this thread.
     if (b->slot[b->out_marker].id != self) {
       // Data is not for this thread.
       pthread_mutex_unlock (&b->mutex);
       return NULL;
     }
     item = (char *) malloc (b->slot[b->out_marker].size);
     memcpy (item, b->slot[b->out_marker].data, b->slot[b->out_marker].size);
     free (b->slot[b->out_marker].data);
     b->out_marker %= BUFFER_SIZE;
     pthread_mutex_unlock (&b->mutex);
     pthread_cond_signal (&b->empty_slot);
     return (item);
   }