
Processes in a system cycle between two states: a CPU (Central Processing Unit) burst, in which calculations are carried out, and an I/O (input/output) burst, in which data is either sent to or received by the system. CPU scheduling attempts to maximise the efficiency and ‘fairness’ of the CPU by allowing one process to utilise the CPU whilst another waits for its I/O. If this isn’t addressed, CPU cycles are lost whilst the CPU sits idle waiting for a single process’s I/O to complete.

The role of the CPU scheduler is to select the next process for an idle CPU from a queue of processes that are ready to execute immediately (the ready queue). The order in which the processes are handled depends on the algorithm implemented by the CPU scheduler.

There are two types of algorithm applied: non-preemptive and preemptive. A non-preemptive algorithm must be applied when a new process is required, i.e. when a process terminates or when a process transitions from the running to the waiting state. However, when a process goes from the running state to the ready state, or from the waiting state to the ready state, a preemptive algorithm may be used. In order for preemptive algorithms to be implemented, it must be ensured that the hardware supports timer interrupts. Once the scheduling algorithm has selected a process, the dispatcher hands control of the CPU to it. This involves a context switch, a switch to user mode and then a jump to the correct location in the newly loaded program. The time taken for this to complete is called the dispatch latency.
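
To make the distinction concrete, the following minimal Python sketch (the state names and the table itself are illustrative assumptions, not any real kernel’s API) maps each of the four transitions above to the class of algorithm that may act on it:

    # Illustrative sketch only: maps each process state transition to the
    # class of scheduling decision it triggers. Not real operating-system code.
    SCHEDULING_POINTS = {
        ("running", "waiting"):    "non-preemptive",  # process issues an I/O request
        ("running", "terminated"): "non-preemptive",  # process exits
        ("running", "ready"):      "preemptive",      # e.g. a timer interrupt fires
        ("waiting", "ready"):      "preemptive",      # e.g. an I/O completes
    }

    def scheduling_kind(old_state, new_state):
        """Return which class of algorithm can act on this transition."""
        return SCHEDULING_POINTS[(old_state, new_state)]

    print(scheduling_kind("running", "waiting"))  # non-preemptive
    print(scheduling_kind("waiting", "ready"))    # preemptive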

When choosing the optimum algorithm to implement for CPU scheduling, there are criteria to consider in order to make the best decision: CPU utilisation should be maximised, so as not to waste a single CPU cycle wherever possible, and the ‘throughput’ (the number of processes executed per unit time) should also be maximised. Conversely, the turnaround time, waiting time and response time should all be minimised as far as possible (http://www.studytonight.com/operating-system/cpu-scheduling).
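
These criteria can be computed for any finished schedule. As a rough sketch in Python (the process data below are invented example values, not measurements):

    # Sketch: computes the criteria above for a completed schedule.
    # All times share one unit; the example values are invented for illustration.
    def metrics(procs):
        """procs: list of dicts with 'arrival', 'start', 'finish', 'burst' times."""
        n = len(procs)
        makespan = max(p["finish"] for p in procs) - min(p["arrival"] for p in procs)
        throughput = n / makespan                                 # processes per unit time
        turnaround = [p["finish"] - p["arrival"] for p in procs]  # total time in the system
        waiting = [t - p["burst"] for t, p in zip(turnaround, procs)]
        response = [p["start"] - p["arrival"] for p in procs]     # delay before first run
        return throughput, sum(turnaround) / n, sum(waiting) / n, sum(response) / n

    example = [
        {"arrival": 0, "start": 0,  "finish": 10, "burst": 10},
        {"arrival": 1, "start": 10, "finish": 13, "burst": 3},
    ]
    print(metrics(example))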

For the following evaluation of algorithms, a single CPU burst is assumed per process. First-Come First-Served (FCFS) is the most primitive form of scheduling, involving a first-in, first-out queue order, and is a non-preemptive algorithm. The benefits of using FCFS are that it is simple to implement and that it allows the process being executed to complete its CPU burst.

Therefore, there is no need to context switch and take the CPU away from a process preemptively.

However, as well as producing long waiting times, FCFS can also slow the system down through what is known as the convoy effect: shorter processes queue up behind one large process, preventing the CPU from being optimally used, as after the large process finishes the CPU goes through periods of high-intensity processing (with minimal I/O) and vice versa [3].
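
A short simulation makes the convoy effect visible. The sketch below (burst lengths are invented example values) services a FIFO queue of single CPU bursts and reports each process’s waiting time; placing the long burst first multiplies the average wait more than fivefold:

    # Sketch of FCFS under the single-burst-per-process assumption above.
    # Burst lengths are invented; note how a long job at the head of the
    # queue inflates every later process's waiting time (convoy effect).
    def fcfs(bursts):
        """Return per-process waiting times for a FIFO queue of CPU bursts."""
        waiting, clock = [], 0
        for burst in bursts:       # queue order == arrival order
            waiting.append(clock)  # time spent queued before running
            clock += burst         # runs to completion: no preemption
        return waiting

    convoy = fcfs([24, 3, 3])             # long job arrives first
    print(convoy, sum(convoy) / 3)        # [0, 24, 27] -> average 17.0
    no_convoy = fcfs([3, 3, 24])          # long job arrives last
    print(no_convoy, sum(no_convoy) / 3)  # [0, 3, 6]   -> average 3.0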

Round Robin (RR) shares similarities with FCFS; however, each CPU burst is given a limit called a time quantum. If the process completes before the time quantum expires, the schedule continues in a first-in, first-out manner.

However, if the time quantum expires first, a timer interrupt occurs and the process is placed at the back of the queue (the algorithm is preemptive). This is ideal for small tasks, as even if a large process is placed higher in the queue than a short one, each gets a ‘fair’ attempt. However, this proves problematic with processes of equal length, where RR can give a lower throughput than FCFS, and the overhead produced through context switching reduces the CPU utilisation [3].
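
Under the same single-burst assumption, RR can be sketched compactly as below. The quantum value and burst lengths are illustrative, and the context-switch cost is ignored here, which flatters RR slightly:

    # Sketch of Round Robin with a fixed time quantum, assuming one CPU
    # burst per process, all arriving at time 0, and free context switches.
    from collections import deque

    def round_robin(bursts, quantum):
        """Return (pid, start, end) slices in the order the CPU ran them."""
        queue = deque(enumerate(bursts))
        clock, schedule = 0, []
        while queue:
            pid, remaining = queue.popleft()
            run = min(quantum, remaining)        # run until done or quantum expires
            schedule.append((pid, clock, clock + run))
            clock += run
            if remaining > run:                  # timer interrupt: requeue at the back
                queue.append((pid, remaining - run))
        return schedule

    print(round_robin([24, 3, 3], quantum=4))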

Shortest-Job-First Scheduling (SJF) arranges the queue in ascending order of CPU burst time.
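
With all processes available at time 0, non-preemptive SJF amounts to sorting the queue by burst length and servicing it shortest-first, as in this sketch (burst values invented for illustration):

    # Sketch of non-preemptive SJF: with all processes available at time 0,
    # servicing bursts in ascending length order minimises average waiting time.
    def sjf(bursts):
        """Return (average waiting time, run order) for shortest-first service."""
        order = sorted(range(len(bursts)), key=lambda pid: bursts[pid])
        waiting, clock = {}, 0
        for pid in order:
            waiting[pid] = clock   # time this process spent queued
            clock += bursts[pid]
        return sum(waiting.values()) / len(bursts), order

    print(sjf([6, 8, 7, 3]))       # average 7.0, run order [3, 0, 2, 1]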