Revision 9 as of 2017-12-30 01:30:12 (size: 3959; comment: converted to 1.6 markup)
About Me
I am a Linux Kernel Developer working at IBM, Linux Technology Center, India. I work in the area of CPU Power Management, specifically enabling deep CPU Idle states and improving the CPU Frequency subsystem on IBM POWER platforms. I have been involved in the discussions around Power Aware Scheduling in the kernel community where we are trying to improve the power efficiency of the kernel. Of late I have been reviewing patches around Dynamic Ticks and am keenly interested in the Full Dynamic Ticks Infrastructure.
Project Details
References
The chapter on "Time Management" in Professional Linux Kernel Architecture: http://www.e-reading.link/bookreader.php/142109/Professional_Linux_kernel_architecture.pdf
The NO_HZ documentation in the kernel tree: https://www.kernel.org/doc/Documentation/timers/NO_HZ.txt
Status of Linux Dynamic Ticks: http://ertl.jp/~shinpei/conf/ospert13/slides/FredericWeisbecker.pdf
Tasks
The following tasks are listed in the increasing order of complexity.
Please note that the same task can be claimed by more than one intern.
To begin with, download the vanilla kernel.
Challenge Problem 1
Configure two different kernels: one with CONFIG_NO_HZ_IDLE=y and another with CONFIG_NO_HZ_IDLE=n. Boot each of these kernels and perform the following test on each.
Download the ebizzy benchmark from http://sourceforge.net/projects/ebizzy/. Compile it and run it a few times until you see consistent results, and record the number of records read. Then download the powertop utility and run it while the benchmark is running, recording the percentage of time spent in the different idle states.
Now compare the records read and the percentage of time spent in the different idle states on the two kernels, and conclude which of the two is more power efficient.
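When comparing runs, it helps to check numerically that the results really are consistent before drawing conclusions. A minimal sketch, assuming ebizzy's usual "N records/s" summary line (the sample outputs below are made up):

```python
import re
import statistics

def records_per_sec(output: str) -> int:
    """Extract the 'N records/s' figure from one ebizzy run's output.

    Assumes the summary-line format ebizzy normally prints,
    e.g. '72362 records/s'.
    """
    match = re.search(r"(\d+)\s+records/s", output)
    if match is None:
        raise ValueError("no 'records/s' line found")
    return int(match.group(1))

def summarize(runs: list[str]) -> tuple[float, float]:
    """Mean and sample stdev of records/s across several runs.

    A stdev that is small relative to the mean indicates the
    'consistent results' the task asks for.
    """
    rates = [records_per_sec(r) for r in runs]
    return statistics.mean(rates), statistics.stdev(rates)

# Illustrative outputs from three runs on one kernel (not real data):
runs = ["72362 records/s", "71984 records/s", "72510 records/s"]
mean, dev = summarize(runs)
print(f"mean {mean:.0f} records/s, stdev {dev:.0f}")
```

Running the same summary for each kernel gives two comparable means, which is what the conclusion in this task should be based on.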
Challenge Problem 2
Clone the dynticks-testing suite from git://git.kernel.org/pub/scm/linux/kernel/git/frederic/dynticks-testing.git and follow the instructions in its README.
Run it on two different kernels: the first compiled with CONFIG_NO_HZ_IDLE=y in the config file, and the second compiled with CONFIG_NO_HZ_FULL=y.
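Before booting each kernel it is worth confirming which NO_HZ options its config actually enables. A small sketch that inspects the text of a .config file (or /boot/config-$(uname -r)); the sample config content is illustrative:

```python
# Report the state of each NO_HZ option in a kernel config.
# Handles both 'CONFIG_FOO=y' lines and '# CONFIG_FOO is not set' comments.

def nohz_options(config_text: str) -> dict[str, str]:
    """Return the value ('y', 'n', or 'not set') of each NO_HZ option."""
    options = {"CONFIG_NO_HZ_IDLE": "not set", "CONFIG_NO_HZ_FULL": "not set"}
    for line in config_text.splitlines():
        line = line.strip()
        for opt in options:
            if line.startswith(opt + "="):
                options[opt] = line.split("=", 1)[1]
            elif line == f"# {opt} is not set":
                options[opt] = "n"
    return options

sample = """\
CONFIG_NO_HZ_IDLE=y
# CONFIG_NO_HZ_FULL is not set
"""
print(nohz_options(sample))
```

To check a real kernel, pass the contents of its .config file to nohz_options() instead of the sample string.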
Goal of the Project for the Internship Period
The goal is to improve Full Dynticks in general, so we are quite flexible about which tasks to perform. Some of them are easier to handle than others, but there is one task that could be a good start: struct timer_list affinity.
struct timer_list timers are those timers that have CONFIG_HZ granularity. They are handled by the timer tick: every time the tick fires, we check whether any timers in the struct timer_list queue have expired, and if so we execute them.
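The tick-driven expiry described above can be sketched as a toy userspace model (this is a deliberate simplification of the kernel's timer wheel, not kernel code; jiffies and the queue are reduced to their essentials):

```python
# Toy model of tick-driven struct timer_list expiry: on every tick, walk the
# pending queue and run the callbacks whose expiry time (in jiffies) has passed.

class Timer:
    def __init__(self, expires, callback):
        self.expires = expires      # jiffy at which the timer should fire
        self.callback = callback

class TimerQueue:
    def __init__(self):
        self.jiffies = 0
        self.pending = []

    def add_timer(self, timer):
        self.pending.append(timer)

    def tick(self):
        """One timer interrupt: advance jiffies, run and remove expired timers."""
        self.jiffies += 1
        expired = [t for t in self.pending if t.expires <= self.jiffies]
        self.pending = [t for t in self.pending if t.expires > self.jiffies]
        for t in expired:
            t.callback()

q = TimerQueue()
fired = []
q.add_timer(Timer(expires=2, callback=lambda: fired.append("t1")))
q.tick()          # jiffies=1: nothing expires
q.tick()          # jiffies=2: t1 fires
print(fired)      # ['t1']
```

The point of the model: nothing expires unless a tick fires, which is exactly why an unwanted timer on a full-dynticks CPU forces that CPU's tick back on.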
Some of these timers are pinned (they execute on a specific CPU only); others are not pinned, which means they can execute on any CPU.
This is what we call timer affinity, and this affinity is fine for most workloads: timers usually execute fast enough that we don't care much about them. But full-dynticks CPUs don't want to be disturbed by anything, and if a non-pinned timer decides to execute on a full-dynticks CPU, the tick will fire on that CPU in order to run the timer.
In order to solve this, we would like to affine non-pinned timers to the CPUs that aren't in full dynticks mode.
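The proposed placement policy can be sketched as a CPU-selection function. This is a userspace model only: in the kernel the decision would live in the timer-placement code, the set of full-dynticks CPUs comes from the nohz_full= boot parameter, and the "lowest remaining CPU" choice here is a placeholder, not the policy the kernel would actually use:

```python
# Sketch of the proposed policy: a non-pinned timer may run on any CPU, but we
# steer it away from full-dynticks (nohz_full) CPUs so their tick can stay
# stopped. Pinned timers keep their CPU.

def pick_timer_cpu(timer_cpu, pinned, online_cpus, nohz_full_cpus):
    """Return the CPU a timer should be queued on.

    timer_cpu      -- CPU the timer was armed on
    pinned         -- True if the timer must run on timer_cpu
    online_cpus    -- set of online CPUs
    nohz_full_cpus -- set of CPUs in full-dynticks mode
    """
    if pinned:
        return timer_cpu                    # pinned timers are never migrated
    housekeeping = online_cpus - nohz_full_cpus
    if not housekeeping:                    # nowhere better to migrate to
        return timer_cpu
    if timer_cpu in housekeeping:           # already on a tick-keeping CPU
        return timer_cpu
    return min(housekeeping)                # placeholder choice: lowest such CPU

cpus = {0, 1, 2, 3}
nohz_full = {2, 3}                          # e.g. booted with nohz_full=2-3
print(pick_timer_cpu(3, False, cpus, nohz_full))   # -> 0 (steered away)
print(pick_timer_cpu(3, True, cpus, nohz_full))    # -> 3 (pinned stays put)
```

The interesting design questions for the internship are exactly the parts this sketch glosses over: which housekeeping CPU to prefer, and how to do the selection cheaply in the timer enqueue path.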
We are flexible about the timeline. This can be done during the whole OPW internship timeline. Here is a tentative timeline though.
Week 1: Understanding the tasks involved and clearing doubts by interacting with the community and mentors.
Weeks 2-8: Major task for the project: improving struct timer_list affinity so that non-pinned timers are affined to CPUs that are not in full-dynticks mode.
Week 9: Backup week in case some tasks take more time than expected.
Week 10: Get review from mentors and the community. Testing. Code refactoring.
Weeks 11-13: Extra tasks related to the project.
Contact Info
You can email me at: preeti@linux.vnet.ibm.com
My IRC handle is preeti