
1. Prominent features

1.1. A scalable block layer for high performance SSD storage

For decades, traditional hard disks have defined the design assumptions that operating systems use to connect applications with storage device drivers. With the advent of modern solid-state drives (SSDs), those assumptions are no longer valid. Linux used a single coarse lock to protect the IO request queue, a design that can achieve an IO submission rate of around 800,000 IOs per second, regardless of how many cores are used to submit IOs. This was more than enough for traditional magnetic hard disks, whose IO submission rate for random accesses is in the hundreds, but it is not enough for modern SSDs, which can achieve rates close to 1 million IOs per second and are improving fast with every new generation. It is also a poor fit for the modern multicore world.

This release includes a new design for the Linux block layer, based on two levels of queues: one level of per-CPU queues for submitting IO, which then funnel down into a second level of hardware submission queues. The mapping between submission queues and hardware queues might be 1:1 or N:M, depending on hardware support and configuration. Experiments have shown that this design can achieve many millions of IOs per second, leveraging the new capabilities of NVM-Express or high-end PCI-E devices and multicore CPUs, while still providing the common interface and convenience features of the block layer.

Paper: [ Linux Block IO: Introducing Multi-queue SSD Access on Multi-core Systems]

Recommended LWN article: [ The multiqueue block layer]

Code: [ commit]

1.2. nftables, the successor of iptables

iptables has a number of limitations at both the functional and code design levels: problems with the rule update system and code duplication, which cause trouble both for code maintenance and for users. nftables is a new packet filtering framework that solves these problems, while providing backwards compatibility for current iptables users.

The core of the nftables design is a pseudo-virtual machine. A [ userspace utility] interprets the rule-set provided by the user, compiles it to pseudo-bytecode and then transfers it to the kernel. This approach can replace thousands of lines of code, since the bytecode instruction set can express the packet selectors for all existing protocols. Because the userspace utility compiles the protocol matches to bytecode, a specific kernel-space extension is no longer necessary for each match, which means that users will likely not need to upgrade the kernel to obtain new matches and features: userspace upgrades will provide them. There is also [ a new library] for utilities that need to interact with the firewall.

nftables provides backwards compatibility with iptables. There are [ new iptables/ip6tables utilities] that translate iptables rules to nftables bytecode, and it is also possible to use and add new xtables modules. As a bonus, these new utilities provide features that weren't possible with the old iptables design: notification of changes in tables/chains, better incremental rule update support, and the ability to enable/disable chains per table.

A how-to for the new utility and syntax is available [ here]

Recommended LWN article: [ The return of nftables]

Video talk about nftables: ([ slides])

Project page and utility source code:

Code: [ commit], [ commit]

1.3. Radeon: power management enabled by default, automatic GPU switching, Hawaii support

Linux 3.11 [ added] power management support for many AMD Radeon devices. Power management support improves power consumption, which is critical for battery-powered devices, but it is also a requirement for good high-end performance, as it provides the ability to reclock the GPU to higher power states in GPUs and APUs that default to slower clock speeds.

That support had to be enabled with a module parameter. This release enables power management by default for a lot of AMD Radeon hardware: BTC ASICs, SI ASICs, SUMO/PALM APUs, Evergreen ASICs, R7xx ASICs and Hawaii. Code: [ commit], [ commit], [ commit], [ commit], [ commit], [ commit]

Linux 3.12 added support for automatic GPU switching in laptops with dual GPUs. This release adds support for this feature in AMD Radeon hardware. Code: [ commit]

This release adds support for [ R9 290X] "Hawaii" devices. Code: [ commit]

1.4. Power capping framework

This release includes a framework that allows setting power consumption limits on devices that support it. It has been designed around the Intel RAPL (Running Average Power Limit) mechanism available in the latest Intel processors (Sandy Bridge and later; more devices will get RAPL support in the future). This framework provides a consistent interface between the kernel and user space that allows power capping drivers to expose their settings to user space in a uniform way. You can see the documentation [ here]

Code: [ (commit 1], [ 2], [ 3], [ 4)]

1.5. Improved performance in NUMA systems

Modern hardware with many CPUs usually has a memory controller for each CPU. While every CPU can access all of the memory, accessing the memory attached to the local memory controller is faster than accessing memory attached to the controllers of other CPUs. This is called NUMA, "non-uniform memory access". Because the performance profile differs depending on the locality of the memory accesses, it is important that the operating system schedules a process on a CPU close to the memory it will access.

The way Linux handled these situations was deficient; Linux 3.8 [ included a new NUMA foundation] that would allow building smarter NUMA policies in future releases. This release includes many such policies, which attempt to place a process near its memory and can handle cases such as pages shared between processes or transparent huge pages. New sysctls have been added to enable/disable and tune the NUMA scheduling (see the documentation [ here])

Recommended LWN article: [ NUMA scheduling progress]

1.6. Improved page table access scalability in hugepage workloads

The Linux kernel tracks the memory mappings of each process in a data structure called a page table. In workloads that use hugepages, the lock used to protect some parts of the table had become a source of lock contention. This release uses finer-grained locking for these parts, improving page table access scalability in threaded hugepage workloads. For more details, see the recommended LWN article.

Recommended LWN article: [ Split PMD locks]

Code: [ commit], [ commit]

1.7. Squashfs performance improved

Squashfs, the read-only filesystem used by most live distros, installers, and some embedded Linux distributions, has received important improvements that dramatically increase performance in workloads with multiple parallel reads. One of them is direct decompression of data into the Linux page cache, which avoids copying the data and eliminates the single lock used to protect the buffer. The other is multithreaded decompression.

Code: [ (commit 1], [ 2], [ 3)]

1.8. TCP Fast Open enabled by default

TCP Fast Open is an optimization of the TCP connection establishment process that allows eliminating one round trip time from certain kinds of TCP conversations, which can improve the load speed of web pages. Support for this feature, which requires userspace support, was added in [ Linux 3.6] and [ Linux 3.7]. This release enables TCP Fast Open by default.

1.9. NFC payments support

This release implements support for the [ Secure Element]. A netlink API is available to enable, disable and discover NFC-attached secure elements (embedded or UICC ones). With some userspace help, this makes it possible to support NFC payments, used for financial transactions. Only the pn544 driver currently supports this API.

Code: [ commit]

1.10. Support for the High-availability Seamless Redundancy protocol

[ High-availability Seamless Redundancy] (HSR) is a redundancy protocol for Ethernet. It provides instant failover redundancy for such networks, at the cost of requiring a special network topology where all nodes are connected in a ring (each node having two physical network interfaces). It is suited for applications that demand high availability and very short reaction time.

Code: [ commit]

2. Drivers and architectures

All the driver and architecture-specific changes can be found in the [ Linux_3.13-DriversArch page]

3. Core

4. Memory management

5. Block layer

6. File systems

7. Networking

8. Crypto

9. Virtualization

10. Security

11. Tracing/perf

12. Other news sites that track the changes of this release

KernelNewbies: Linux_3.13 (last edited 2014-01-17 19:48:58 by diegocalleja)