multitasking

task_t

Processes and their internal state are represented by task_t, which stores all relevant information about a process: its name and PID, scheduler state, saved registers, paging directory, file descriptor table, and more.

task_t is defined as follows:

typedef struct task {
	char* name; //user-printable process name
	int id;  //PID
	int queue; //scheduler ring this task is slotted in

	task_state state; //current process state
	uint32_t wake_timestamp; //used if process is in PIT_WAIT state

	uint32_t begin_date; //scheduler bookkeeping timestamps
	uint32_t end_date;

	uint32_t relinquish_date; //when this task last relinquished the CPU
	uint32_t lifespan;
	struct task* next; //next task in the linked list of tasks

	uint32_t esp; //stack pointer
	uint32_t ebp; //base pointer
	uint32_t eip; //instruction pointer

	page_directory_t* page_dir; //paging directory for this process


	/*
	 * the below only exist for non-kernel tasks
	 * (such as loaded ELFs)
	 */

	//end of .bss section of current task
	uint32_t prog_break; 
	//virtual address of .bss segment
	uint32_t bss_loc;

	/* array of child tasks this process has spawned
	 * each time a process fork()'s,
	 * the new child is added to this array
	 * when wait() is used, it uses the tasks in this array
	 */
	array_m* child_tasks;
	//parent process that spawned this one
	struct task* parent;

	//exit status of zombie task
	//this field is undefined until task finishes executing
	int exit_code;

	//TODO move this near task_state and make clean
	//optional context provided with blocking reason
	//up to user what this means
	void* block_context;

	//file descriptor table
	//this stores all types of file descriptors, 
	//including stdin/out/err, open files, and pipes
	fd_entry fd_table[FD_MAX];

	//pseudo-terminal stream
	//this field provides implementation for
	//stdin/stdout/stderr
	//(all of these map to the same backing stream)
	struct std_stream* std_stream;

	//virtual memory 'slide'
	//offset in virtual memory where this program is to be placed
	uint32_t vmem_slide;

	//bitmap of privileged actions this task can perform
	uint32_t permissions;

	//array of xserv windows this task has spawned
	//this is so we know where to send stdio to
	array_m* windows;
} task_t;

scheduling

axle provides a choice of scheduler algorithms. The scheduler algorithm is set at boot time to one of the following:

typedef enum mlfq_option {
    LOW_LATENCY = 0, //minimize latency between tasks running, tasks share a single queue
    PRIORITIZE_INTERACTIVE, //use more queues, allowing interactive tasks to dominate
} mlfq_option;

Scheduler policy is set through tasking_install(), like so:

    tasking_install(LOW_LATENCY);

When the LOW_LATENCY policy is selected, axle's scheduler uses a round-robin approach: all active tasks share a single queue, and each runs in turn.
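
As a rough illustration (not axle's actual code), task selection under LOW_LATENCY could look like the sketch below. It assumes the task list is circular through each task's next pointer and that a RUNNABLE task_state value exists; both details are assumptions.

//illustrative sketch only: pick the next task to run under LOW_LATENCY
//assumes tasks form a circular list through 'next' and that a RUNNABLE
//task_state value exists; both are assumptions, not axle's real definitions
static task_t* round_robin_pick_next(task_t* current) {
    task_t* candidate = current->next;
    //walk the ring until we find a task that can run,
    //falling back to the current task if nothing else is runnable
    while (candidate != current && candidate->state != RUNNABLE) {
        candidate = candidate->next;
    }
    return candidate;
}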

When the PRIORITIZE_INTERACTIVE policy is selected, axle's scheduler uses a multi-level feedback queue (MLFQ) policy. In this setup, tasks are moved between priority queues based on their observed behavior.

MLFQ policy

Under an MLFQ policy, the scheduler maintains several queues, each with a higher priority than the last. These queues store runnable processes.
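
Purely as an illustration, the queues could be represented as an array of array_m lists, one per priority level; the array_m_create helper used here is an assumption rather than a detail taken from axle's source.

//illustrative sketch: one list of runnable tasks per priority level
//array_m_create is assumed to allocate a fixed-capacity array_m,
//matching the array_m type used in task_t above
static array_m* run_queues[MLFQ_DEFAULT_QUEUE_COUNT];

static void mlfq_create_queues(void) {
    for (int i = 0; i < MLFQ_DEFAULT_QUEUE_COUNT; i++) {
        run_queues[i] = array_m_create(MLFQ_MAX_QUEUE_LENGTH);
    }
}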

If any tasks exist in higher-priority queues, they are scheduled before tasks in lower-priority queues. New tasks all start in the highest-priority queue. A process may voluntarily relinquish the CPU while waiting on a blocking action; for example, a process may block waiting on a keystroke from the user. When a task blocks before it has used its entire scheduler quantum, it is allowed to remain at its current priority. Whenever a task consumes CPU for its entire scheduler quantum, it is demoted to a lower-priority queue. Tasks in lower queues have a longer scheduler quantum than highly interactive tasks.
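
Roughly, this demotion rule might be expressed like the sketch below, where queue 0 is assumed to be the highest priority and mlfq_enqueue and the used_full_quantum flag are illustrative stand-ins rather than axle's real interface.

//illustrative sketch: requeue a task after it stops running
//queue 0 is assumed to be the highest priority; larger indices are lower
static void mlfq_requeue(task_t* task, bool used_full_quantum) {
    if (used_full_quantum && task->queue < MLFQ_DEFAULT_QUEUE_COUNT - 1) {
        //the task consumed its whole quantum: demote it one level
        task->queue++;
    }
    //a task that blocked before its quantum expired keeps its priority
    mlfq_enqueue(task->queue, task);
}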

This setup grants highly interactive tasks, such as those interfacing directly with the user, higher priority than CPU-bound processes that simply need raw processor time, while processes that fall into lower-priority queues are allowed to run for longer stretches at a time. In short, MLFQ adjusts each task's priority based on its observed behavior.

To prevent process starvation, in which a process would never run because higher-priority tasks dominate all CPU time, axle periodically 'boosts' every task back to the highest-priority queue. This gives processes that were previously CPU-heavy the opportunity to regain interactivity. The period between scheduler boosts is controlled by the BOOSTER_PERIOD constant (a sketch of the boost follows the constants below). Other scheduler constants are as follows:

//name [default value] //description
MAX_TASKS [128] //maximum task_t's that can be active at once
MLFQ_DEFAULT_QUEUE_COUNT [16] //number of priority queues to use for scheduler
MLFQ_MAX_QUEUE_LENGTH [16] //size of each priority queue
HIGH_PRIO_QUANTUM [5] //time, in ms, that the highest priority tasks should run for
BOOSTER_PERIOD [1000] //period, in ms, that every task is boosted to the highest priority queue
MAX_RESPONDERS [32] //maximum number of tasks that can request to receive user IO events
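
As referenced above, the periodic boost might look roughly like the following sketch; mlfq_dequeue and mlfq_enqueue are hypothetical queue helpers, and queue 0 is again assumed to be the highest-priority queue.

//illustrative sketch: every BOOSTER_PERIOD ms, move every task back to the
//highest-priority queue so formerly CPU-heavy tasks can regain interactivity
static void mlfq_priority_boost(void) {
    for (int queue = 1; queue < MLFQ_DEFAULT_QUEUE_COUNT; queue++) {
        task_t* task;
        //mlfq_dequeue/mlfq_enqueue stand in for axle's real queue helpers
        while ((task = mlfq_dequeue(queue)) != NULL) {
            task->queue = 0;
            mlfq_enqueue(0, task);
        }
    }
}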