Message-ID: <20240607145902.1137853-1-kernel@pankajraghav.com>
Date: Fri, 7 Jun 2024 14:58:51 +0000
From: "Pankaj Raghav (Samsung)" <kernel@...kajraghav.com>
To: david@...morbit.com,
djwong@...nel.org,
chandan.babu@...cle.com,
brauner@...nel.org,
akpm@...ux-foundation.org,
willy@...radead.org
Cc: mcgrof@...nel.org,
linux-mm@...ck.org,
hare@...e.de,
linux-kernel@...r.kernel.org,
yang@...amperecomputing.com,
Zi Yan <zi.yan@...t.com>,
linux-xfs@...r.kernel.org,
p.raghav@...sung.com,
linux-fsdevel@...r.kernel.org,
kernel@...kajraghav.com,
hch@....de,
gost.dev@...sung.com,
cl@...amperecomputing.com,
john.g.garry@...cle.com
Subject: [PATCH v7 00/11] enable bs > ps in XFS
From: Pankaj Raghav <p.raghav@...sung.com>
This is the seventh version of the series that enables block size > page size
(Large Block Size) in XFS, targeted for inclusion in 6.11.
The context and motivation can be seen in the cover letter of the RFC v1 [0].
We also recorded a talk about this effort at LPC [1] for anyone who would
like more context.
The major change since v6 is that we now retry getting a folio, and we warn
if we fail to get a folio in __filemap_get_folio() when the
order <= min_order (Patch 3) [7].
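To make that fallback concrete, here is a minimal userspace sketch of the
retry-then-warn idea (illustrative only: try_alloc_pages(),
alloc_folio_min_order() and the fake failure threshold are made up for this
example and are not the actual __filemap_get_folio() code):

#include <stdio.h>
#include <stdlib.h>

/*
 * Illustrative model only (not kernel code): allocate a "folio" of
 * 2^order pages, falling back to smaller orders on failure, but never
 * below min_order, and warn if even the min_order attempt fails.
 * try_alloc_pages() stands in for the page allocator.
 */
static void *try_alloc_pages(unsigned int order)
{
	/* Pretend allocations above order 4 fail, to exercise the fallback. */
	if (order > 4)
		return NULL;
	return malloc((size_t)4096 << order);
}

static void *alloc_folio_min_order(unsigned int order, unsigned int min_order)
{
	if (order < min_order)
		order = min_order;

	do {
		void *folio = try_alloc_pages(order);
		if (folio) {
			printf("allocated order-%u folio\n", order);
			return folio;
		}
	} while (order-- > min_order);

	/* Mirrors the new warning: a min_order folio is mandatory for LBS. */
	fprintf(stderr, "WARNING: failed to allocate min_order=%u folio\n",
		min_order);
	return NULL;
}

int main(void)
{
	free(alloc_folio_min_order(6, 2)); /* e.g. 16k blocks on 4k pages */
	return 0;
}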
A lot of emphasis has been put on testing using kdevops, starting with an XFS
baseline [3]. The testing has been split into regression and progression.
Regression testing:
In regression testing, we ran the whole test suite to check for regressions on
existing profiles due to the page cache changes.
I also ran the split_huge_page_test selftest on an XFS filesystem to check
that huge page splits into min order chunks are done correctly.
No regressions were found with these patches added on top.
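As a rough model of the constraint that selftest exercises (the helper names
and the 4k base page size below are assumptions for illustration, not the
kernel code), the requested split order has to be clamped to the minimum
folio order implied by the filesystem block size:

#include <stdio.h>

#define PAGE_SHIFT 12 /* assume 4k base pages for the example */

/*
 * Illustrative helpers (not the kernel implementation): with bs > ps the
 * page cache cannot hold folios smaller than the filesystem block size,
 * so splitting a huge folio must stop at that minimum order instead of
 * going all the way down to order 0.
 */
static unsigned int min_folio_order(unsigned long block_size)
{
	unsigned int order = 0;

	while ((1UL << (PAGE_SHIFT + order)) < block_size)
		order++;
	return order;
}

static unsigned int clamp_split_order(unsigned int new_order,
				      unsigned long block_size)
{
	unsigned int min_order = min_folio_order(block_size);

	return new_order < min_order ? min_order : new_order;
}

int main(void)
{
	/* A split to order 0 on a 16k block size fs is clamped to order 2. */
	printf("split order = %u\n", clamp_split_order(0, 16384));
	return 0;
}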
Progression testing:
For progression testing, we tested 8k, 16k, 32k and 64k block sizes. To
compare against existing support, an ARM VM with a 64k base page size (without
our patches) was used as a reference to check whether failures on a 4k base
page size system are actually due to LBS support.
There are some tests that assume block size < page size and need to be fixed.
We have a tree with fixes for xfstests [4]; most of the changes have already
been posted, and only a few minor changes still need to be posted. Part of
these changes has already been upstreamed to fstests, and new tests have also
been written and are out for review, namely for mmap zeroing-around corner
cases, compaction and fsstress races on mm, and stress testing folio
truncation on file-mapped folios.
No new failures were found with the LBS support.
We've done some preliminary performance tests with fio on XFS with a 4k block
size against pmem and NVMe, with buffered IO and Direct IO, comparing vanilla
vs. these patches applied, and detected no regressions.
We also wrote an eBPF tool called blkalgn [5] to check that IO sent to the
device is aligned and at least the filesystem block size in length.
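As a rough model of the property blkalgn checks for each IO (the function
name and values below are made up for illustration and are not the tool's
implementation):

#include <stdbool.h>
#include <stdio.h>

/*
 * Check the two properties described above: the IO offset is aligned to
 * the filesystem block size and the IO length is at least one block.
 */
static bool io_ok_for_lbs(unsigned long long offset, unsigned long long len,
			  unsigned long fs_block_size)
{
	return (offset % fs_block_size == 0) && (len >= fs_block_size);
}

int main(void)
{
	/* 16k block size: a 4k write at offset 8k violates both rules. */
	printf("%d\n", io_ok_for_lbs(8192, 4096, 16384));   /* 0 */
	printf("%d\n", io_ok_for_lbs(32768, 16384, 16384)); /* 1 */
	return 0;
}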
For those who want this in a git tree, we have it up on the kdevops
large-block-minorder-for-next-v7 tag [6].
[0] https://lore.kernel.org/lkml/20230915183848.1018717-1-kernel@pankajraghav.com/
[1] https://www.youtube.com/watch?v=ar72r5Xf7x4
[2] https://lkml.kernel.org/r/20240501153120.4094530-1-willy@infradead.org
[3] https://github.com/linux-kdevops/kdevops/blob/master/docs/xfs-bugs.md
489 non-critical issues and 55 critical issues. We've determined and reported
that the 55 critical issues all fall into 5 common XFS asserts or hung
tasks and 2 memory management asserts.
[4] https://github.com/linux-kdevops/fstests/tree/lbs-fixes
[5] https://github.com/iovisor/bcc/pull/4813
[6] https://github.com/linux-kdevops/linux/
[7] https://lore.kernel.org/linux-kernel/Zl20pc-YlIWCSy6Z@casper.infradead.org/#t
Changes since v6:
- Warn users if we can't get a min order folio in __filemap_get_folio().
- Added iomap_dio_init() function and moved zero buffer init into that.
- Modified split_huge_pages_pid() to also consider non-anonymous memory
and removed condition for anonymous memory in split_huge_pages_file().
- Collected RVB from Hannes.
Dave Chinner (1):
xfs: use kvmalloc for xattr buffers
Hannes Reinecke (1):
readahead: rework loop in page_cache_ra_unbounded()
Luis Chamberlain (1):
mm: split a folio in minimum folio order chunks
Matthew Wilcox (Oracle) (1):
fs: Allow fine-grained control of folio sizes
Pankaj Raghav (7):
filemap: allocate mapping_min_order folios in the page cache
readahead: allocate folios with mapping_min_order in readahead
filemap: cap PTE range to be created to allowed zero fill in
folio_map_range()
iomap: fix iomap_dio_zero() for fs bs > system page size
xfs: expose block size in stat
xfs: make the calculation generic in xfs_sb_validate_fsb_count()
xfs: enable block size larger than page size support
fs/internal.h | 5 ++
fs/iomap/buffered-io.c | 6 ++
fs/iomap/direct-io.c | 26 ++++++++-
fs/xfs/libxfs/xfs_attr_leaf.c | 15 ++---
fs/xfs/libxfs/xfs_ialloc.c | 5 ++
fs/xfs/libxfs/xfs_shared.h | 3 +
fs/xfs/xfs_icache.c | 6 +-
fs/xfs/xfs_iops.c | 2 +-
fs/xfs/xfs_mount.c | 11 +++-
fs/xfs/xfs_super.c | 18 +++---
include/linux/huge_mm.h | 14 +++--
include/linux/pagemap.h | 106 +++++++++++++++++++++++++++++-----
mm/filemap.c | 38 +++++++-----
mm/huge_memory.c | 55 ++++++++++++++++--
mm/readahead.c | 98 ++++++++++++++++++++++++-------
15 files changed, 330 insertions(+), 78 deletions(-)
base-commit: d97496ca23a2d4ee80b7302849404859d9058bcd
--
2.44.1