== How does the ''current'' macro help you access your process-related record? ==

Note: This entry documents an alternate implementation to the one described in [[FAQ/get_current]].

While browsing the kernel code, you will often stumble across the use of a macro named ''current'', which is used to access the kernel's process control block (''struct task_struct'') of the currently executing task. Learning how this macro is implemented will satisfy your innate curiosity and also shed some light on the way threads' memory is laid out.

Let's start by looking up its definition in ''include/asm-um/current.h'' (kernel version 2.6.16.20). As you can see, this FAQ item considers the UM architecture implementation. UM stands for User Mode: it is the port of the Linux kernel to an architecture which isn't hardware (i386, alpha, ...) but software; in fact, it is the port of the Linux kernel onto itself, as described on the User Mode Linux (UML) project web page (http://user-mode-linux.sourceforge.net/). In a nutshell, this project enables you to run a Linux virtual machine on top of your regular Linux kernel; refer to the above-mentioned link for more information.

Back to our ''current'' macro and its definition in ''include/asm-um/current.h'':
{{{
#include "linux/thread_info.h"

#define current (current_thread_info()->task)
}}}
When you use ''current'', you are in fact calling the ''current_thread_info()'' function, which returns a pointer to a ''struct thread_info'' data structure. Inside that data structure, you then access the ''task'' field, which is nothing but a pointer to the ''struct task_struct'' of the process the current thread is part of. Conversely, the ''struct task_struct'' also has a field pointing back to the ''struct thread_info'', named, appropriately, ''thread_info''.
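The mutual pointers can be sketched in a small user-space program. This is a simplified model, not the real kernel headers: the structure layouts, the demo task with pid 42, and the stand-in ''current_thread_info()'' are all illustrative assumptions; only the two cross-pointing fields and the macro definition mirror the kernel code quoted above.
{{{
/* Illustrative sketch: how "current" reaches the task_struct through
 * thread_info. NOT the real kernel headers. */
#include <stdio.h>

struct task_struct;                      /* forward declaration */

struct thread_info {
    struct task_struct *task;            /* points to the owning task */
};

struct task_struct {
    struct thread_info *thread_info;     /* points back to the thread_info */
    int pid;
};

/* Stand-in for the real current_thread_info(); the kernel locates this
 * structure via stack-address masking (shown later), while here we just
 * return a statically allocated demo instance. */
static struct thread_info *current_thread_info(void)
{
    static struct task_struct demo_task = { 0, 42 }; /* made-up pid */
    static struct thread_info demo_ti;
    demo_ti.task = &demo_task;
    demo_task.thread_info = &demo_ti;
    return &demo_ti;
}

#define current (current_thread_info()->task)

int main(void)
{
    printf("%d\n", current->pid);        /* prints 42 */
    return 0;
}
}}}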
In ''thread_info.h'', ''current_thread_info()'' is defined as:
{{{
/* how to get the thread information struct from C */
static inline struct thread_info *current_thread_info(void)
{
        struct thread_info *ti;
        unsigned long mask = PAGE_SIZE *
                (1 << CONFIG_KERNEL_STACK_ORDER) - 1;
        ti = (struct thread_info *) (((unsigned long) &ti) & ~mask);
        return ti;
}
}}}
This implementation reveals an interesting fact about where the ''thread_info'' data structure of each thread lives: it is stored at the bottom of the kernel stack. Let us assume that the kernel stack of your thread grows downward, i.e. the stack begins at a high memory address and its top (the pointer to the last added element) progresses toward lower memory addresses. In this scenario, the ''struct thread_info'' of your thread is located at the lowest address of the kernel stack memory area. This means that while the kernel stack begins at the highest address, the ''thread_info'' begins at the lowest; both are therefore as far as possible from one another within the area of memory holding the kernel stack.

When a thread needs to access the ''struct task_struct'' of its process, it first needs to locate its ''struct thread_info'' in the kernel stack and then follow its ''task'' field pointer, which leads in turn to the ''struct task_struct'' of the process. This is exactly the role of the above code; the ''CONFIG_KERNEL_STACK_ORDER'' constant was introduced to make the code less dependent on the kernel stack size. We multiply the size of a page (let's assume 4096 bytes) by 2 to the power of the kernel stack size order (we will use 12 in this example). We then subtract 1 and obtain:
{{{
mask = 4096 * 2^12 - 1 = 16777215d
     = 0000 0000 1111 1111 1111 1111 1111 1111
     = 0x00FFFFFF
}}}
What can be the role of ''mask''? Taking into consideration the size of the kernel stack, we determined a binary mask that has zeroes in the most significant bits and ones in the rest.
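The mask arithmetic, and the way the function then uses it to round an address down to the start of the stack area, can be checked with a short user-space sketch. The page size (4096), the stack order (12), and the address of ''ti'' (0xC5E515AB) are the values assumed in this FAQ item's example, not values read from a real kernel.
{{{
#include <assert.h>
#include <stdio.h>

/* values assumed in the example above; the real CONFIG_KERNEL_STACK_ORDER
 * is a build-time configuration option */
#define PAGE_SIZE 4096UL
#define CONFIG_KERNEL_STACK_ORDER 12

int main(void)
{
    /* same expression as in current_thread_info() */
    unsigned long mask = PAGE_SIZE * (1 << CONFIG_KERNEL_STACK_ORDER) - 1;
    printf("mask = %lu = 0x%08lX\n", mask, mask);   /* 16777215 = 0x00FFFFFF */
    assert(mask == 0x00FFFFFFUL);

    /* hypothetical address of the local variable ti; ANDing it with the
     * one's complement of the mask clears the offset bits and isolates
     * the lowest address of the kernel stack area */
    unsigned long ti_addr = 0xC5E515ABUL;
    unsigned long base = ti_addr & ~mask;
    printf("base = 0x%08lX\n", base);               /* 0xC5000000 */
    assert(base == 0xC5000000UL);
    return 0;
}
}}}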
These most significant bits are the ones that won't change in the address of any location within the kernel stack, while the others can be seen as an offset within the kernel stack memory area. The mask then undergoes a one's complement bitwise operation (~) and is used in a bitwise AND operation with the address of the ''ti'' local variable.

Why use the address of ''ti''? When this function executes, it executes on behalf of a thread that is trying to use the ''current'' macro. As with any system call or trap into the kernel, the thread's kernel stack is used to hold the activation records of the function calls made while executing kernel code. This means that the local variable ''ti'' is located on the kernel stack of the thread, and its address is therefore within the kernel stack.

By applying a one's complement to the mask, we obtain a binary mask that has ones in the most significant bits (the ones common to all addresses in the kernel stack) and zeros in the least significant bits (the ones used to hold offsets within the kernel stack). As we perform a bitwise AND between ''&ti'' and this mask, we obtain a binary vector containing only the most significant bits, and therefore representing the start address of the kernel stack area where the ''struct thread_info'' is located. Let's take an example:
{{{
 mask = 0000 0000 1111 1111 1111 1111 1111 1111
~mask = 1111 1111 0000 0000 0000 0000 0000 0000
  &ti = 1100 0101 1110 0101 0001 0101 1010 1011
  AND = 1100 0101 0000 0000 0000 0000 0000 0000
}}}
We clearly isolated the lowest address in the kernel stack memory area, which is exactly what we wanted :)
----
[[CategoryFAQ]]