The quantum value isn't reset when a thread enters a wait state. In fact, as explained earlier, when the wait is satisfied, the thread's quantum value is decremented by 1 quantum unit (equivalent to one-third of a clock interval), except for threads running at priority 14 or higher, which have their quantum reset to a full turn after a wait.
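A minimal sketch of that rule, with invented field and constant names (the real bookkeeping lives in the kernel's KTHREAD structure and is not literally this code):

```c
/* Illustrative sketch of the quantum rule described above.
 * All names here are hypothetical. */
#define QUANTUM_UNITS_PER_CLOCK_INTERVAL 3   /* 1 unit == one-third of a clock tick */

struct thread {
    int priority;        /* 0..31 */
    int quantum_units;   /* remaining quantum, in one-third-tick units */
    int quantum_reset;   /* full quantum value for this thread */
};

/* Called when a thread's wait is satisfied. */
void on_wait_satisfied(struct thread *t)
{
    if (t->priority >= 14) {
        /* Priority 14 and above: the quantum is reset to a full turn. */
        t->quantum_units = t->quantum_reset;
    } else {
        /* Otherwise the wait is charged one unit (one-third of a tick). */
        t->quantum_units -= 1;
    }
}
```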
A lower-priority thread is preempted when a higher-priority thread becomes ready to run. This situation might occur for a couple of reasons: a higher-priority thread's wait completes (the event that the other thread was waiting for has occurred), or a thread's priority is increased or decreased. In either of these cases, Windows must determine whether the currently running thread should continue to run or whether it should be preempted to allow a higher-priority thread to run.
Threads running in user mode can preempt threads running in kernel mode; the mode in which the thread is running doesn't matter. The thread priority is the determining factor. When a thread is preempted, it is put at the head of the ready queue for the priority it was running at. The figure below illustrates this situation.
[Figure: Preemptive thread scheduling]
In this figure, a thread with priority 18 emerges from a wait state and regains the CPU, causing the thread that had been running at priority 16 to be bumped to the head of the ready queue.
Notice that the bumped thread isn't going to the end of the queue but to the beginning; when the preempting thread has finished running, the bumped thread can complete its quantum. When the running thread exhausts its CPU quantum, Windows must determine whether the thread's priority should be decremented and then whether another thread should be scheduled on the processor.
If the thread priority is reduced, Windows looks for a more appropriate thread to schedule. For example, a more appropriate thread would be a thread in a ready queue with a higher priority than the new priority for the currently running thread.
If the thread priority isn't reduced and there are other threads in the ready queue at the same priority level, Windows selects the next thread in the ready queue at that same priority level and moves the previously running thread to the tail of that queue (giving it a new quantum value and changing its state from running to ready).
This case is illustrated in the figure below. If no other thread of the same priority is ready to run, the thread gets to run for another quantum.
[Figure: Quantum-end thread scheduling]
When a thread finishes running (either because it returned from its main routine, called ExitThread, or was killed with TerminateThread), it moves from the running state to the terminated state. If there are no handles open on the thread object, the thread is removed from the process thread list and the associated data structures are deallocated and released.
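For reference, a small user-mode sketch of the normal termination path (standard Win32 calls; the worker routine and exit code are made up for illustration): a thread that returns from its start routine is terminated on its behalf via ExitThread, and its thread object lingers until the last handle to it is closed.

```c
#include <windows.h>
#include <stdio.h>

/* Returning from the start routine terminates the thread cleanly;
 * the runtime calls ExitThread with this return value. */
static DWORD WINAPI Worker(LPVOID arg)
{
    printf("worker %lu running\n", GetCurrentThreadId());
    return 42;  /* becomes the thread's exit code */
}

int main(void)
{
    DWORD exitCode = 0;
    HANDLE hThread = CreateThread(NULL, 0, Worker, NULL, 0, NULL);
    if (hThread == NULL)
        return 1;

    /* The thread object stays around (in the terminated state) as long as
     * this handle is open, so its exit code can still be queried. */
    WaitForSingleObject(hThread, INFINITE);
    GetExitCodeThread(hThread, &exitCode);
    printf("thread exited with %lu\n", exitCode);

    /* Closing the last handle lets the system free the thread's structures. */
    CloseHandle(hThread);
    return 0;
}
```

TerminateThread, by contrast, kills a thread abruptly without letting it clean up, which is why returning from the start routine (or calling ExitThread) is the preferred path.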
A thread's context and the procedure for context switching vary depending on the processor's architecture. A typical context switch requires saving and reloading the instruction pointer, the kernel stack pointer, and a pointer to the address space in which the thread runs (the process's page table directory).
The kernel saves this information from the old thread by pushing it onto the current (old) thread's kernel-mode stack, updating the stack pointer, and saving the stack pointer in the old thread's KTHREAD block.
The kernel stack pointer is then set to the new thread's kernel stack, and the new thread's context is loaded. If the new thread is in a different process, it loads the address of its page table directory into a special processor register so that its address space is available (see the description of address translation in Chapter 7). Control then passes to the new thread's restored instruction pointer, and the new thread resumes execution.
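The following is a rough, architecture-neutral sketch of that sequence. Every name in it is hypothetical; the real context-switch code is hand-written assembly that differs per architecture:

```c
/* Hypothetical, simplified illustration of a context switch.
 * None of these symbols are actual kernel routines. */
struct kprocess;

struct kthread {
    void *kernel_stack_pointer;   /* saved stack pointer, as kept in the KTHREAD block */
    struct kprocess *process;
};

struct kprocess {
    unsigned long page_directory; /* physical address of the page table directory */
};

extern void *save_context_to_current_stack(void);
extern void  load_page_directory(unsigned long page_directory);
extern void  restore_context_from_stack(void *kernel_stack_pointer);

void context_switch(struct kthread *old_thread, struct kthread *new_thread)
{
    /* 1. Push the old thread's state onto its kernel stack and record the
     *    resulting stack pointer in its KTHREAD block. */
    old_thread->kernel_stack_pointer = save_context_to_current_stack();

    /* 2. If the new thread belongs to a different process, switch address
     *    spaces by loading its page table directory (for example, CR3 on x86). */
    if (new_thread->process != old_thread->process)
        load_page_directory(new_thread->process->page_directory);

    /* 3. Switch to the new thread's kernel stack and restore its saved context;
     *    execution resumes at the new thread's saved instruction pointer. */
    restore_context_from_stack(new_thread->kernel_stack_pointer);
}
```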
Various Windows process viewer utilities report the idle process using different names. In reality, however, the idle threads don't have a priority level because they run only when there are no real threads to run. (Remember, only one thread per Windows system is actually running at priority 0: the zero page thread, explained in Chapter 7.) Although some details of the flow vary between architectures, the basic flow of control of the idle thread is as follows: it enables and disables interrupts (allowing any pending interrupts to be delivered);
checks whether any DPCs (described in Chapter 3) are pending on the processor and, if so, clears the pending software interrupt and delivers them; checks whether a thread has been selected to run next on the processor and, if so, dispatches that thread; and calls the HAL processor idle routine in case any power management functions need to be performed. In Windows Server 2003, the idle thread also scans for threads waiting to run on other processors; this is explained in the upcoming multiprocessor scheduling section.
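A simplified sketch of that idle loop follows; the helper names are invented, and the real routine differs per architecture and Windows version:

```c
#include <stddef.h>

/* Hypothetical rendering of the idle loop described above. */
struct kthread;

extern void enable_interrupts(void);
extern void disable_interrupts(void);
extern int  dpcs_pending(int processor);
extern void clear_dpc_interrupt_and_deliver(int processor);
extern struct kthread *next_thread_selected(int processor);
extern void dispatch(struct kthread *t);
extern void hal_processor_idle(void);   /* HAL idle/power-management routine */

void idle_loop(int processor)
{
    for (;;) {
        /* Briefly enable interrupts so anything pending can be delivered. */
        enable_interrupts();
        disable_interrupts();

        /* Deliver any DPCs queued to this processor. */
        if (dpcs_pending(processor))
            clear_dpc_interrupt_and_deliver(processor);

        /* If the dispatcher has selected a thread for this CPU, run it. */
        struct kthread *t = next_thread_selected(processor);
        if (t != NULL)
            dispatch(t);

        /* Otherwise let the HAL perform any power-management idling. */
        hal_processor_idle();
    }
}
```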
Windows boosts, and then decays, the priority of threads in a number of situations. The intent of these adjustments is to improve overall system throughput and responsiveness as well as resolve potentially unfair scheduling scenarios. Like any scheduling algorithms, however, these adjustments aren't perfect, and they might not benefit all applications. Windows never boosts the priority of threads in the real-time range (16 through 31); therefore, scheduling is always predictable with respect to other threads in the real-time range. Windows assumes that if you're using the real-time thread priorities, you know what you're doing.
The boost value depends on the reason for the boost, but the boost is always applied to a thread's base priority, not its current priority. As illustrated in the figure below, after the boost is applied, the thread gets to run for one quantum at the elevated priority level. After the thread has completed its quantum, it decays one priority level and then runs another quantum. This cycle continues until the thread's priority level has decayed back to its base priority.
A thread with a higher priority can still preempt the boosted thread, but the interrupted thread gets to finish its time slice at the boosted priority level before it decays to the next lower priority.
[Figure: Priority boosting and decay]
As noted earlier, these boosts apply only to threads in the dynamic priority range (0 through 15). No matter how large the boost is, the thread will never be boosted beyond level 15 into the real-time priority range. In other words, a priority 14 thread that receives a boost of 5 will go up to priority 15, and a priority 15 thread that receives a boost will remain at priority 15. When a thread that was waiting for an executive event or a semaphore object has its wait satisfied because of a call to the function SetEvent, PulseEvent, or ReleaseSemaphore, it receives a boost of 1.
This adjustment helps balance the scales. The thread gets to run at the elevated priority for its remaining quantum (as described earlier, quantums are reduced by 1 when threads exit a wait) before decaying one priority level at a time until it reaches its original base priority.
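As a concrete illustration of the kind of wait completion that earns this boost of 1, the sketch below (ordinary Win32 calls; the thread and event names are made up) has one thread block on an event that another thread later sets. The boost itself is applied inside the kernel and isn't visible in user-mode code:

```c
#include <windows.h>
#include <stdio.h>

static HANDLE g_event;

static DWORD WINAPI Waiter(LPVOID arg)
{
    /* The thread enters a wait state here. When the wait is satisfied by
     * SetEvent below, the kernel applies the boost of 1 described above
     * (and charges one quantum unit for the wait). */
    WaitForSingleObject(g_event, INFINITE);
    printf("waiter woke up\n");
    return 0;
}

int main(void)
{
    g_event = CreateEvent(NULL, FALSE, FALSE, NULL);  /* auto-reset, not signaled */
    HANDLE hThread = CreateThread(NULL, 0, Waiter, NULL, 0, NULL);

    Sleep(100);          /* let the waiter block */
    SetEvent(g_event);   /* satisfying the wait triggers the priority boost */

    WaitForSingleObject(hThread, INFINITE);
    CloseHandle(hThread);
    CloseHandle(g_event);
    return 0;
}
```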
A special boost is applied to threads that are awoken as a result of setting an event with the special functions NtSetEventBoostPriority (used in Ntdll.dll for critical sections) and KeSetEventBoostPriority (used for executive resources). If a thread waiting for an event is woken up as a result of the special event boost function and its priority is 13 or below, it will have its priority boosted to the setting thread's priority plus one. If its quantum is less than 4 quantum units, it is set to 4 quantum units. This boost is removed at quantum end. Whenever a thread in the foreground process completes a wait operation on a kernel object, the kernel function KiUnwaitThread boosts its current (not base) priority by the current value of PsPrioritySeparation.
The windowing system is responsible for determining which process is considered to be in the foreground. As described in the section on quantum controls, PsPrioritySeparation reflects the quantum-table index used to select quantums for the threads of foreground applications.
However, in this case, it is being used as a priority boost value. The reason for this boost is to improve the responsiveness of interactive applications: by giving the foreground application a small boost when it completes a wait, it has a better chance of running right away, especially when other processes at the same base priority might be running in the background.
Unlike other types of boosting, this boost applies to all Windows systems, and you can't disable it, even if you've disabled priority boosting using the Windows SetThreadPriorityBoost function. To see the boost, take the following steps: select the Applications option (this causes PsPrioritySeparation to get a value of 2), and then run the older version of the Performance tool. The older version is needed for this experiment because it can query performance counter values at a frequency faster than the Windows Performance tool (which has a maximum interval of once per second).
Select the Thread object, and then select the Priority Current counter. In the Instance box, scroll down the list until you see the cpustres process. Select the second thread (thread 1); the first thread is the GUI thread. Select Chart from the Options menu, change the Vertical Maximum to 16, and set the Interval to a fraction of a second. Now bring the Cpustres process to the foreground. You should see the priority of the Cpustres thread being boosted by 2 and then decaying back to the base priority.
The reason Cpustres receives a boost of 2 periodically is that the thread you're monitoring is sleeping about 75 percent of the time and then waking up; the boost is applied when the thread wakes up. To see the thread get boosted more frequently, increase the Activity level from Low to Medium to Busy. If you set the Activity level to Maximum, you won't see any boosts, because Maximum in Cpustres puts the thread into an infinite loop; the thread never invokes any wait functions and therefore never receives a boost.
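The SetThreadPriorityBoost function mentioned above is the documented way for a program to opt a thread out of the ordinary dynamic boosts (it does not affect the foreground separation boost just described). A minimal sketch of its use:

```c
#include <windows.h>

int main(void)
{
    /* Disable dynamic priority boosting for the current thread.
     * TRUE means "disable boost"; pass FALSE to re-enable it. */
    if (!SetThreadPriorityBoost(GetCurrentThread(), TRUE))
        return 1;

    /* Read the setting back to confirm it took effect. */
    BOOL disabled = FALSE;
    GetThreadPriorityBoost(GetCurrentThread(), &disabled);
    /* disabled is now TRUE: ordinary dynamic boosts are off for this thread. */

    return 0;
}
```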
Threads that own windows receive an additional boost of 2 when they wake up because of windowing activity, such as the arrival of window messages. The windowing system (Win32k.sys) applies this boost. The reason for this boost is similar to the previous one: to favor interactive applications. To see it, just follow these steps: If you're running Windows XP or Windows Server 2003, select the Advanced tab and ensure that the Programs option is selected; if you're running Windows 2000, ensure that the Applications option is selected.
This older version of the Performance tool is needed for this experiment because it can query performance counter values at a faster frequency. The Windows Performance tool has a maximum interval of once per second. In the Instance box, scroll down the list until you see Notepad thread 0.
Click it, click the Add button, and then click the Done button. As in the previous experiment, select Chart from the Options menu. You should see the priority of thread 0 in Notepad at 8, 9, or 10. Because Notepad entered a wait state shortly after it received the boost of 2 that threads in the foreground process receive, it might not yet have decayed from 10 to 9 and then to 8. With Performance Monitor in the foreground, move the mouse across the Notepad window.
Make both windows visible on the desktop. You'll see that the priority sometimes remains at 10 and sometimes at 9, for the reasons just explained. The reason you won't likely catch Notepad at 8 is that it runs so little after receiving the GUI thread boost of 2 that it never experiences more than one priority level of decay before waking up again because of additional windowing activity and receiving the boost of 2 again.
Now bring Notepad to the foreground. You should see the priority rise to 12 and remain there (or drop to 11, because it might experience the normal priority decay that occurs for boosted threads at quantum end). The priority rises to 12 because the thread is receiving two boosts: the boost of 2 applied to GUI threads when they wake up to process windowing input, plus an additional boost of 2 because Notepad is in the foreground. If you then move the mouse over Notepad (while it's still in the foreground), you might see the priority drop to 11 or maybe even 10 as it experiences the priority decay that normally occurs on boosted threads as they complete their quantums.
However, the boost of 2 that is applied because it's the foreground process remains as long as Notepad remains in the foreground. Imagine the following situation: you have a priority 7 thread that's running, preventing a priority 4 thread from ever receiving CPU time; however, a priority 11 thread is waiting for some resource that the priority 4 thread has locked.
But because the priority 7 thread in the middle is eating up all the CPU time, the priority 4 thread will never run long enough to finish whatever it's doing and release the resource blocking the priority 11 thread.
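For concreteness, here is a minimal Win32 sketch of such a scenario. It is illustrative only: SetThreadPriority takes relative priority values, so the absolute levels 4, 7, and 11 above are only approximated, and the worker logic is made up.

```c
#include <windows.h>
#include <stdio.h>

/* A low-priority thread holds a lock that a high-priority thread needs,
 * while a middle-priority thread hogs the CPU. */
static CRITICAL_SECTION g_lock;
static volatile LONG g_done = 0;

static DWORD WINAPI LowHolder(LPVOID arg)     /* roughly the "priority 4" thread */
{
    EnterCriticalSection(&g_lock);
    for (volatile int i = 0; i < 100000000; i++)
        ;                                     /* needs CPU time to finish and release */
    LeaveCriticalSection(&g_lock);
    return 0;
}

static DWORD WINAPI MiddleHog(LPVOID arg)     /* roughly the "priority 7" thread */
{
    while (!g_done)
        ;                                     /* compute-bound, never waits */
    return 0;
}

static DWORD WINAPI HighWaiter(LPVOID arg)    /* roughly the "priority 11" thread */
{
    EnterCriticalSection(&g_lock);            /* blocked until LowHolder releases */
    printf("high-priority thread finally got the lock\n");
    LeaveCriticalSection(&g_lock);
    InterlockedExchange(&g_done, 1);
    return 0;
}

int main(void)
{
    InitializeCriticalSection(&g_lock);

    HANDLE low  = CreateThread(NULL, 0, LowHolder, NULL, CREATE_SUSPENDED, NULL);
    HANDLE mid  = CreateThread(NULL, 0, MiddleHog, NULL, CREATE_SUSPENDED, NULL);
    HANDLE high = CreateThread(NULL, 0, HighWaiter, NULL, CREATE_SUSPENDED, NULL);

    SetThreadPriority(low,  THREAD_PRIORITY_BELOW_NORMAL);
    SetThreadPriority(mid,  THREAD_PRIORITY_NORMAL);
    SetThreadPriority(high, THREAD_PRIORITY_ABOVE_NORMAL);

    ResumeThread(low);
    Sleep(50);            /* give the low-priority thread time to acquire the lock */
    ResumeThread(mid);
    ResumeThread(high);

    WaitForSingleObject(high, INFINITE);
    WaitForSingleObject(mid, INFINITE);
    WaitForSingleObject(low, INFINITE);
    DeleteCriticalSection(&g_lock);
    return 0;
}
```

On a multiprocessor machine the three threads simply run on different processors, so to reproduce the starvation you would also need to confine them to one CPU (for example, with SetThreadAffinityMask).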
What does Windows do to address this situation? Once per second, the balance set manager (a system thread that exists primarily to perform memory management functions and is described in more detail in Chapter 7) scans the ready queues for any threads that have been in the ready state (that is, haven't run) for approximately 4 seconds.
If it finds such a thread, the balance set manager boosts the thread's priority to 15. On Windows 2000 and Windows XP, the thread quantum is set to twice the process quantum; on Windows Server 2003, the quantum is set to 4 quantum units. Once the quantum expires, the thread's priority decays immediately to its original base priority.
If the thread wasn't finished and a higher priority thread is ready to run, the decayed thread will return to the ready queue, where it again becomes eligible for another boost if it remains there for another 4 seconds. The balance set manager doesn't actually scan all ready threads every time it runs. To minimize the CPU time it uses, it scans only 16 ready threads; if there are more threads at that priority level, it remembers where it left off and picks up again on the next pass.
Also, it will boost only 10 threads per pass; if it finds 10 threads meriting this particular boost (which would indicate an unusually busy system), it stops the scan at that point and picks up again on the next pass. Will this algorithm always solve the priority inversion issue? No; it's not perfect by any means. But over time, CPU-starved threads should get enough CPU time to finish whatever processing they were doing and reenter a wait state.
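Summarizing those limits as pseudocode (the names are invented; this is not the actual balance set manager routine):

```c
#include <stddef.h>

/* Hypothetical sketch of the once-per-second anti-starvation pass. */
#define SCAN_LIMIT   16   /* at most 16 ready threads examined per pass  */
#define BOOST_LIMIT  10   /* at most 10 threads boosted per pass         */
#define STARVED_SECS 4    /* "ready but not run" threshold, ~4 seconds   */
#define BOOST_LEVEL  15   /* starved threads are boosted to priority 15  */

struct kthread;

extern struct kthread *next_ready_thread_from(int *resume_cursor);
extern int  seconds_in_ready_state(struct kthread *t);
extern void boost_to(struct kthread *t, int priority);

void balance_set_manager_scan(int *resume_cursor)
{
    int scanned = 0, boosted = 0;

    while (scanned < SCAN_LIMIT && boosted < BOOST_LIMIT) {
        struct kthread *t = next_ready_thread_from(resume_cursor);
        if (t == NULL)
            break;                       /* no more ready threads */
        scanned++;

        if (seconds_in_ready_state(t) >= STARVED_SECS) {
            boost_to(t, BOOST_LEVEL);    /* also receives a longer quantum */
            boosted++;
        }
    }
    /* resume_cursor is remembered so the next pass picks up where this
     * one left off rather than rescanning from the start. */
}
```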
In this experiment, we'll see CPU usage change when a thread's priority is boosted. Run Cpustres. Change the activity level of the active thread (by default, Thread 1) from Low to Maximum, and change the thread priority from Normal to Below Normal. Again, you need the older version of the Performance tool for this experiment because it can query performance counter values at a frequency faster than once per second.
Raise the priority of Performance Monitor to real-time by running Task Manager, clicking the Processes tab, and selecting the Perfmon4.exe process. Right-click the process, select Set Priority, and then select Realtime. If you receive a Task Manager Warning message box warning you of system instability, click the Yes button. Run another copy of CPU Stress. In this copy, change the activity level of Thread 1 from Low to Maximum. Now switch back to Performance Monitor.
You should see CPU activity every 4 or so seconds because the thread is boosted to priority 15. Run Windows Media Player (or some other audio playback program), and begin playing some audio content. Run Cpustres from the Windows resource kits, and set the activity level of thread 1 to Maximum. You should hear the music playback stop as the compute-bound thread begins consuming all available CPU time.
Every so often, you should hear bits of sound as the starved thread in the audio playback process gets boosted to 15 and runs enough to send more data to the sound card. On a uniprocessor system, scheduling is relatively simple: the highest priority thread that wants to run is always running.
On a multiprocessor system, scheduling is more complex: Windows attempts to schedule threads on the optimal processor for the thread, taking into account the thread's preferred and previous processors as well as the configuration of the multiprocessor system.
Therefore, while Windows attempts to schedule the highest-priority runnable threads on all available CPUs, it guarantees only that the single highest-priority thread is running somewhere. Before we describe the specific algorithms used to choose which threads run where and when, let's examine the additional information Windows maintains to track thread and processor state on multiprocessor systems, as well as the two new types of multiprocessor systems supported by Windows (hyperthreaded and NUMA).
As explained in the "Dispatcher Database" section earlier in the chapter, the dispatcher database refers to the information maintained by the kernel to perform thread scheduling. On multiprocessor Windows 2000 and Windows XP systems, the ready queues and ready summary have the same structure as they do on uniprocessor systems. In addition to the ready queues and the ready summary, Windows maintains two bitmasks that track the state of the processors on the system.
How these bitmasks are used is explained in the upcoming section "Multiprocessor Thread-Scheduling Algorithms". The two bitmasks that Windows maintains are the active processor mask (KeActiveProcessors), which has a bit set for each usable processor on the system (this might be less than the number of actual processors if the licensing limits of the version of Windows running support fewer than the number of available physical processors), and the idle summary (KiIdleSummary), in which each set bit represents an idle processor.
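From user mode, a program can see a projection of the active processor mask through the documented GetProcessAffinityMask function, whose system mask has one bit set per processor in use; for example:

```c
#include <windows.h>
#include <stdio.h>

int main(void)
{
    DWORD_PTR processMask = 0, systemMask = 0;

    /* systemMask mirrors the idea of the active processor mask: one bit
     * per processor the system is actually using. */
    if (!GetProcessAffinityMask(GetCurrentProcess(), &processMask, &systemMask))
        return 1;

    int active = 0;
    for (DWORD_PTR m = systemMask; m != 0; m >>= 1)
        active += (int)(m & 1);

    printf("system affinity mask: 0x%llx (%d active processors)\n",
           (unsigned long long)systemMask, active);
    return 0;
}
```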