About Visopsys – Multitasker



Visopsys is a multitasking operating system kernel.  In lay terms this means that the kernel’s “scheduler” — a small piece of independent code inside the kernel — parcels out small packets of time to all of the running programs in very rapid succession.  One such running program is the operating system kernel itself.  Since a single microprocessor core can (generally) only do one thing at a time, the illusion of multiple processes working simultaneously is achieved through this rapid switching between tasks.  This is called “task switching” or “context switching”.  If the multitasker is doing its job properly, and the workload is not too great, the user should never perceive that these switches are even occurring.

What follows is a more technical description of the method by which Visopsys’ scheduler performs these context switches.

Visopsys’ scheduler combines a number of common ideas into a cohesive algorithm.   Its major features include:

  • Pre-emptive
  • Arbitrary number of priority queues
  • Fair scheduling algorithm, except for the highest- and lowest-priority queues
  • Real-time scheduling in the highest-priority queue
  • Background scheduling in the lowest-priority queue

Following is a description of the algorithm by which Visopsys’ scheduler determines task execution order.

There are two “special” queues in the multitasker.  The first (highest-priority) queue is the “real time” queue.  When there are any processes running and ready at this priority level, they will be serviced to the exclusion of all processes from other queues.  Not even the kernel process resides in this queue.

The last (lowest-priority) queue is the “background” queue.  Processes in this queue will only receive processor time when there are no ready processes in any other queue.  Unlike all of the “normal” or “middle” queues, it is entirely possible for processes in this background queue to be starved of processor time.

Because of the existence of these two special-case queues, there must be a minimum of 3 priority queues in the Visopsys multitasker.

The number of priority queues is flexibly based on a configuration macro in the multitasker’s header file and, other than the minor restriction outlined above, is arbitrary.  Increasing the number of priority queues introduces no extra overhead into the kernel (i.e. there really aren’t separate queues).  However, regardless of the number of queues, the “special” queues mentioned above will always exhibit their distinct behaviors.

Thus, using these multiple priority queues, the Administrator can exercise a very fine-grained control over the performance of the various processes in the system.

Amongst all of the processes in the “normal” priority queues, there is a fair approach to scheduling. This fair algorithm utilizes a weighting scheme.  When the scheduler gains control of the processor, each waiting task’s weight is calculated.  In the general case, the task with the highest weight “wins” and is granted the next time slice.

Among the variables which contribute to a process’ weight are the following: priority, waiting time, whether it yielded the previous time slice, and “shortness”.  Shortness will be implemented later (shortest job first), so for now we will concentrate on priority and waiting time.  The formula looks like this:

weight = (task priority * priority ratio) + waiting time

[ 0 is the highest possible priority value.  In the calculation above, “task priority” is actually the inverse of the task’s real priority value.  It is calculated as the lowest possible priority value minus the task’s priority value.  So, for example, if the range of possible priority values was 0 (highest) through 7 (lowest), the “task priority” of a highest-priority task would be: 7 – 0 = 7. ]

The task’s priority is multiplied by the “priority ratio”.  The priority ratio determines the importance of priority vs. waiting time in the scheduler queues.  A priority ratio of zero, for example, would give higher-priority processes no advantage over lower-priority ones, and waiting time alone would determine execution order.  By contrast, a very high ratio ensures that lower-priority tasks must wait a very long time before usurping the time slice of a higher-priority task.

To this value will be added the current waiting time.  The waiting time of each task in the queue starts at zero.  Each time a task is passed over for execution by the scheduler, its waiting time value is increased by one.  Whenever a task is selected to run by the scheduler, its waiting time value is subsequently reset to zero.

After performing this simple calculation for each waiting task, the scheduler can select the “winner” by running the task with the highest weight.

For example, if we have 5 possible priority levels (0, the highest, through 4, the lowest), the priority ratio is set to 3, and we have two tasks waiting as follows:

Task 1: priority=0, waiting time=7
Task 2: priority=2, waiting time=12


task1Weight = ((4 – 0) * 3) + 7  = 19     <- winner
task2Weight = ((4 – 2) * 3) + 12 = 18

Thus, even though task 2 has been waiting considerably longer, task 1’s higher priority wins.  However in a slightly different scenario — using the same constants — if we had:

Task 1: priority=0, waiting time=7
Task 2: priority=2, waiting time=14


task1Weight = ((4 – 0) * 3) + 7  = 19
task2Weight = ((4 – 2) * 3) + 14 = 20     <- winner

In this case, task 2 gets to run since it has been waiting long enough to overcome task 1’s higher priority. This possibility helps to ensure that no processes will starve.

A tie between the highest-weighted tasks is resolved by round-robin queue order.

A process which yielded the previous time slice is arbitrarily given a weight of zero, so as to prevent a group of idle processes from driving up CPU usage.

Visual Operating System