I have "a few" projects I'm working on / intending to work on in the near future.
Folios is the big one. This has sub-projects:
- MM:
- Get large folio support in (5.18)
- Convert GUP to folios (5.18)
- Convert page_vma_mapped to PFNs (5.18)
- Convert rmap to folios (5.18)
- Convert vmscan to folios (partly done for 5.18)
- Move split_huge_page() to non-THP code
- Get rid of all thp_size(), thp_order() and thp_nr_pages() calls (probably?)
- FS:
- Adapt NFS to use large folios
- Adapt btrfs to use large folios
- Adapt CIFS to use large folios
- Convert the a_ops to take folios instead of pages (3 aops done for 5.18; sketch below)
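A rough sketch of what the a_ops conversion looks like for a block-based
filesystem (if I have the 5.18 batch right, it was set_page_dirty,
invalidatepage and launder_page becoming dirty_folio, invalidate_folio
and launder_folio; "myfs" is a placeholder name):

    #include <linux/fs.h>
    #include <linux/buffer_head.h>

    /* Before: page-based ops, offsets/lengths limited to unsigned int. */
    static const struct address_space_operations myfs_aops_old = {
            .set_page_dirty  = __set_page_dirty_buffers,
            .invalidatepage  = block_invalidatepage,
    };

    /* After: folio-based ops; offsets and lengths become size_t,
     * since a folio can be larger than a single page. */
    static const struct address_space_operations myfs_aops = {
            .dirty_folio      = block_dirty_folio,
            .invalidate_folio = block_invalidate_folio,
    };

(The thp_size() / thp_order() / thp_nr_pages() item above maps onto
folio_size() / folio_order() / folio_nr_pages().)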
Tangential to folios:
- Phyr
- Slab (Vlastimil took this over; merged in 5.17)
- net pool as its own type
- pt pages as its own type
- zspage as its own type
- Can we make mapcount more sensible or disappear entirely?
- Remove aops->readpages (sketch below)
- Implement iomap ->writepages properly
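A rough before/after for the ->readpages removal, for a filesystem
using the mpage helpers (myfs and myfs_get_block are placeholder names):

    #include <linux/fs.h>
    #include <linux/mpage.h>
    #include <linux/pagemap.h>

    /* Before: ->readpages gets a list of pages not yet in the cache. */
    static int myfs_readpages(struct file *file,
                    struct address_space *mapping,
                    struct list_head *pages, unsigned nr_pages)
    {
            return mpage_readpages(mapping, pages, nr_pages,
                            myfs_get_block);
    }

    /* After: ->readahead gets pages already locked in the page cache,
     * described by a readahead_control.  No return value: pages left
     * unread are simply unlocked and read later via ->readpage. */
    static void myfs_readahead(struct readahead_control *rac)
    {
            mpage_readahead(rac, myfs_get_block);
    }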
Unrelated to folios:
- Maple Tree (Liam is doing most of the work)
- Shrinking struct mutex (Vishal has taken this on)
- Removing PG_private
- Synchronous ->readpage
- Broadcast readpage errors to all waiters (sketch after this list)
- Implement ->readahead for squashfs (Hsin-Yi)
- Make the last argument to read_mapping_page() / read_mapping_folio() a struct file ptr (sketch after this list)
- Slab Sevenths (Bill is attacking this one)
- NUMA-aware zero page (Bill is working on this one too)
- NUMA text pages in the page cache (Linus seems sceptical)
- Usercopy (5.18. Sent to Kees)
- If we check ->mapping for order-0 pages, is that a sure sign of individual allocation?
- Big Buckets (or integrate with Maple Tree)
- Readahead for compressed filesystems in general
- Fix XArray memory leak when memory allocation fails
- Removing b_end_io (Jitendra)
- Use an iov_iter in proc/vmcore (sent for 5.17? 5.18?) (sketch after this list)
- Enable -Wshadow (needs header file cleanups; example after this list)
- Share memory when caching reflinked files
- Make page cache better at writethrough for O_SYNC (and related)
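Sketch for the synchronous ->readpage / error-broadcast items: today the
errno from the read doesn't survive the page unlock, so waiters can only
see "not uptodate".  Simplified from the read_cache_page() pattern
(read_and_wait is a made-up name; assumes the page is already in the
cache and locked):

    #include <linux/fs.h>
    #include <linux/pagemap.h>

    static int read_and_wait(struct file *file, struct page *page)
    {
            int err = page->mapping->a_ops->readpage(file, page);

            if (err)
                    return err;
            wait_on_page_locked(page);      /* read may complete async */
            /* The I/O errno is gone by now; this one bit is all any
             * waiter gets, hence "broadcast errors to all waiters". */
            if (!PageUptodate(page))
                    return -EIO;
            return 0;
    }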
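For the read_mapping_page() item, the change is just in the prototype:
today the last argument is an untyped cookie handed to the filler.
Declarations only, the second being the proposed form:

    /* Today: almost every caller passes a struct file * or NULL. */
    struct page *read_mapping_page(struct address_space *mapping,
                    pgoff_t index, void *data);

    /* Proposed: make the type honest so ->readpage can rely on it. */
    struct page *read_mapping_page(struct address_space *mapping,
                    pgoff_t index, struct file *file);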
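For the proc/vmcore item: the read path currently carries a buffer
pointer plus a userbuf flag and picks copy_to_user() vs memcpy() by
hand; an iov_iter makes the destination polymorphic, so one call
handles both (copy_oldmem_chunk is a placeholder name):

    #include <linux/uio.h>

    static ssize_t copy_oldmem_chunk(struct iov_iter *iter,
                    void *src, size_t len)
    {
            /* copy_to_iter() works for user and kernel iters
             * alike, so the userbuf flag disappears. */
            if (copy_to_iter(src, len, iter) != len)
                    return -EFAULT;
            return len;
    }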
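For -Wshadow, a toy example of what the warning catches; the header
cleanups come first because macro temporaries in shared headers trip
it constantly:

    /* Build with: gcc -Wshadow -c shadow.c */
    int ret;                        /* file scope */

    int f(int x)
    {
            int ret = x;            /* warning: shadows the global */
            {
                    int ret = 0;    /* warning: shadows the local above */
                    (void)ret;
            }
            return ret;
    }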