Message-ID: <20240429190500.30979-1-ryncsn@gmail.com>
Date: Tue, 30 Apr 2024 03:04:48 +0800
From: Kairui Song <ryncsn@...il.com>
To: linux-mm@...ck.org
Cc: Andrew Morton <akpm@...ux-foundation.org>,
"Huang, Ying" <ying.huang@...el.com>,
Matthew Wilcox <willy@...radead.org>,
Chris Li <chrisl@...nel.org>,
Barry Song <v-songbaohua@...o.com>,
Ryan Roberts <ryan.roberts@....com>,
Neil Brown <neilb@...e.de>,
Minchan Kim <minchan@...nel.org>,
Hugh Dickins <hughd@...gle.com>,
David Hildenbrand <david@...hat.com>,
Yosry Ahmed <yosryahmed@...gle.com>,
linux-fsdevel@...r.kernel.org,
linux-kernel@...r.kernel.org,
Kairui Song <kasong@...cent.com>
Subject: [PATCH v3 00/12] mm/swap: clean up and optimize swap cache index
From: Kairui Song <kasong@...cent.com>

This is based on the latest mm-unstable. Patch 1/12 might not be needed
if f2fs has converted .readahead to use folios; I included it for easier
testing and review.

Currently we use one swap_address_space for every 64M chunk to reduce lock
contention; this is like having a set of smaller swap files inside one
big swap file. But when doing a swap cache lookup or insert, we are
still using the offset into the whole large swap file. This is OK for
correctness, as the offset (key) is unique.
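
As a rough sketch of the current scheme (simplified from mm/swap.h and
mm/swap_state.c; illustrative, not a verbatim copy):

    /* One address_space per 64M worth of 4K swap slots. */
    #define SWAP_ADDRESS_SPACE_SHIFT	14
    #define SWAP_ADDRESS_SPACE_PAGES	(1 << SWAP_ADDRESS_SPACE_SHIFT)

    #define swap_address_space(entry)			    \
            (&swapper_spaces[swp_type(entry)][swp_offset(entry) \
                    >> SWAP_ADDRESS_SPACE_SHIFT])

    /*
     * The address_space is picked per 64M chunk, but the XArray key is
     * still the offset into the whole swap file:
     */
    folio = filemap_get_folio(swap_address_space(entry), swp_offset(entry));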

But the XArray is specially optimized for small indexes: it creates the
radix tree levels lazily, just deep enough to fit the largest key stored
in that XArray. So we are wasting tree nodes unnecessarily. For a 64M
chunk, at most 3 levels are enough to contain everything. But since we
are using the offset into the whole swap file, the offset (key) value
will be way beyond 64M, and so will the tree depth.
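
To make the depth argument concrete, here is a toy helper (xa_depth() is
not a real XArray API) that computes the levels needed for a given
maximum key, assuming the default XA_CHUNK_SHIFT of 6:

    /* Tree levels needed to cover max_index, 6 index bits per level. */
    static unsigned int xa_depth(unsigned long max_index)
    {
            unsigned int depth = 1;

            while (max_index >> (depth * 6))    /* XA_CHUNK_SHIFT == 6 */
                    depth++;
            return depth;
    }

    /*
     * xa_depth((1UL << 14) - 1) == 3: keys within a 64M chunk need 3 levels.
     * xa_depth((1UL << 25) - 1) == 5: keys from a 128G swap file need 5.
     */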

Optimize this by reducing the swap cache search space to a 64M scope.

Test with `time memhog 128G` inside an 8G memcg using 128G of swap
(ramdisk with SWP_SYNCHRONOUS_IO dropped; tested 3 times, results are
stable. The result is similar but the improvement is smaller if
SWP_SYNCHRONOUS_IO is enabled, as the swap-out path can never skip the
swap cache):

Before:
6.07user 250.74system 4:17.26elapsed 99%CPU (0avgtext+0avgdata 8373376maxresident)k
0inputs+0outputs (55major+33555018minor)pagefaults 0swaps
After (+1.8% faster):
6.08user 246.09system 4:12.58elapsed 99%CPU (0avgtext+0avgdata 8373248maxresident)k
0inputs+0outputs (54major+33555027minor)pagefaults 0swaps

Similar results with MySQL and sysbench using swap:
Before:
94055.61 qps
After (+0.8% faster):
94834.91 qps

There is also a very slight drop in radix tree node slab usage:
Before: 303952K
After: 302224K

For this series:

There are multiple places that expect mixed types of pages (page cache
or swap cache), e.g. migration and huge page splitting. There are four
helpers for that:
- page_index
- page_file_offset
- folio_index
- folio_file_pos

To keep the code clean and compatible, this series first cleans up
their usage. page_file_offset and folio_file_pos are historical helpers
that can simply be dropped after the cleanup, and all page_index users
can be converted to folio_index or folio->index, as illustrated below.
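
For example, a typical conversion in the filesystem patches looks
roughly like this (an illustrative hunk, not copied verbatim from the
series):

    /* Before: folio_file_pos() handles page cache and swap cache folios. */
    loff_t pos = folio_file_pos(folio);

    /*
     * After: these filesystems never see swap cache folios, so the plain
     * page cache helper is enough.
     */
    loff_t pos = folio_pos(folio);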

Then introduce two new helpers for swap: swap_cache_index and
swap_dev_pos. Replace swp_offset with swap_cache_index when it is used
to retrieve a folio from the swap cache, and use swap_dev_pos when the
device position of a swap entry is needed. This way, swap_cache_index
can return the optimized value with no compatibility issue.
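
A minimal sketch of the two new helpers, assuming they live in mm/swap.h
(simplified; see the last patch for the actual definitions):

    #define SWAP_ADDRESS_SPACE_MASK (SWAP_ADDRESS_SPACE_PAGES - 1)

    /* Swap cache key: only the offset within the entry's 64M chunk. */
    static inline pgoff_t swap_cache_index(swp_entry_t entry)
    {
            return swp_offset(entry) & SWAP_ADDRESS_SPACE_MASK;
    }

    /* Byte position of the entry on the swap device or file. */
    static inline loff_t swap_dev_pos(swp_entry_t entry)
    {
            return ((loff_t)swp_offset(entry)) << PAGE_SHIFT;
    }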

Ideally, in the future, we may want to reduce SWAP_ADDRESS_SPACE_SHIFT
from 14 to 12: the default XArray chunk shift is 6, so with a shift of
14 we get 3-level trees instead of 2-level trees just for 2 extra bits
(a 12-bit maximum key fits in 2 levels of 6 bits each). But the swap
cache is based on the address_space struct; with 4 times more of that
metadata sparsely distributed in memory, it wastes more cachelines, and
according to my tests the performance gain from this series is almost
canceled out. So first, just have a cleaner separation of offsets and a
smaller search space.

Patch 1/12 - 11/12: Clean up usage of the above helpers.
Patch 12/12: Apply the optimization.

V2: https://lore.kernel.org/linux-mm/20240423170339.54131-1-ryncsn@gmail.com/
Update from V2:
- Clean up usage of page_file_offset and folio_file_pos [Matthew Wilcox]
https://lore.kernel.org/linux-mm/ZiiFHTwgu8FGio1k@casper.infradead.org/
- Use folio in nilfs_bmap_data_get_key [Ryusuke Konishi]

V1: https://lore.kernel.org/all/20240417160842.76665-1-ryncsn@gmail.com/
Update from V1:
- Convert more users to use folio directly when possible [Matthew Wilcox]
- Rename swap_file_pos to swap_dev_pos [Huang, Ying]
- Update comments and commit messages.
- Adjust headers and add a dummy function to fix build errors.

This series is part of the effort to reduce swap cache overhead, and
ultimately to remove SWP_SYNCHRONOUS_IO and unify swap cache usage, as
proposed before:
https://lore.kernel.org/lkml/20240326185032.72159-1-ryncsn@gmail.com/

Kairui Song (12):
f2fs: drop usage of page_index
nilfs2: drop usage of page_index
ceph: drop usage of page_index
NFS: remove nfs_page_length and usage of page_index
cifs: drop usage of page_file_offset
afs: drop usage of folio_file_pos
netfs: drop usage of folio_file_pos
nfs: drop usage of folio_file_pos
mm/swap: get the swap file offset directly
mm: remove page_file_offset and folio_file_pos
mm: drop page_index and convert folio_index to use folio
mm/swap: reduce swap cache search space

fs/afs/dir.c | 6 +++---
fs/afs/dir_edit.c | 4 ++--
fs/ceph/dir.c | 2 +-
fs/ceph/inode.c | 2 +-
fs/f2fs/data.c | 2 +-
fs/netfs/buffered_read.c | 4 ++--
fs/netfs/buffered_write.c | 2 +-
fs/nfs/file.c | 2 +-
fs/nfs/internal.h | 19 -------------------
fs/nfs/nfstrace.h | 4 ++--
fs/nfs/write.c | 6 +++---
fs/nilfs2/bmap.c | 3 +--
fs/smb/client/file.c | 2 +-
include/linux/mm.h | 13 -------------
include/linux/pagemap.h | 25 ++++---------------------
mm/huge_memory.c | 2 +-
mm/memcontrol.c | 2 +-
mm/mincore.c | 2 +-
mm/page_io.c | 6 +++---
mm/shmem.c | 2 +-
mm/swap.h | 24 ++++++++++++++++++++++++
mm/swap_state.c | 12 ++++++------
mm/swapfile.c | 11 +++++------
23 files changed, 65 insertions(+), 92 deletions(-)
--
2.44.0