Message-Id: <20240529134509.120826-1-kernel@pankajraghav.com>
Date: Wed, 29 May 2024 15:44:58 +0200
From: "Pankaj Raghav (Samsung)" <kernel@...kajraghav.com>
To: david@...morbit.com,
chandan.babu@...cle.com,
akpm@...ux-foundation.org,
brauner@...nel.org,
willy@...radead.org,
djwong@...nel.org
Cc: linux-kernel@...r.kernel.org,
hare@...e.de,
john.g.garry@...cle.com,
gost.dev@...sung.com,
yang@...amperecomputing.com,
p.raghav@...sung.com,
cl@...amperecomputing.com,
linux-xfs@...r.kernel.org,
hch@....de,
mcgrof@...nel.org,
linux-mm@...ck.org,
linux-fsdevel@...r.kernel.org
Subject: [PATCH v6 00/11] enable bs > ps in XFS
From: Pankaj Raghav <p.raghav@...sung.com>
This is the sixth version of the series that enables block size > page size
(Large Block Size) in XFS, targeted for inclusion in 6.11.
The context and motivation can be seen in the cover letter of the RFC v1 [0].
We also recorded a talk about this effort at LPC [1] for anyone who would
like more context.
The major changes in this v6 are that the max order is now respected by the
page cache, and iomap direct IO zeroing now uses a 64k buffer instead of
looping through ZERO_PAGE.
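To illustrate the direct IO zeroing change, here is a minimal sketch of
the idea (not the patch itself): allocate one 64k zeroed folio once at
init and attach it to the zeroing bio in a single step, rather than
adding ZERO_PAGE(0) page by page in a loop. ZERO_FSB_SIZE,
zero_fs_block and dio_zero_range() are illustrative names, not
necessarily what the patch uses:

#include <linux/bio.h>
#include <linux/gfp.h>
#include <linux/sizes.h>

/* Illustrative: one buffer covers any fs block size up to 64k. */
#define ZERO_FSB_SIZE	SZ_64K

static struct folio *zero_fs_block;

static int __init dio_zero_buf_init(void)
{
	/* Allocated zeroed once; reused for every sub-block zeroing bio. */
	zero_fs_block = folio_alloc(GFP_KERNEL | __GFP_ZERO,
				    get_order(ZERO_FSB_SIZE));
	return zero_fs_block ? 0 : -ENOMEM;
}

/* Zero up to one fs block with a single folio instead of a page loop. */
static void dio_zero_range(struct bio *bio, size_t len)
{
	WARN_ON_ONCE(len > ZERO_FSB_SIZE);
	bio_add_folio_nofail(bio, zero_fs_block, len, 0);
}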
A lot of emphasis has been put on testing using kdevops, starting with an XFS
baseline [3]. The testing has been split into regression and progression.
Regression testing:
In regression testing, we ran the whole test suite to check for regressions on
existing profiles due to the page cache changes.
No regressions were found with these patches added on top.
Progression testing:
For progression testing, we tested 8k, 16k, 32k and 64k block sizes. To
compare against existing support, an ARM VM with a 64k base page size
(without our patches) was used as a reference, to distinguish actual
failures caused by LBS support on a 4k base page size system.
There are some tests that assume block size < page size and need to be fixed.
We have a tree with fixes for xfstests [4]; most of the changes have been
posted already, and only a few minor changes still need to be posted. Part of
these changes has already been upstreamed to fstests, and new tests have also
been written and are out for review, namely for mmap zeroing-around corner
cases, compaction and fsstress races on mm, and stress testing folio
truncation on file-mapped folios.
No new failures were found with the LBS support.
We've done some preliminary performance tests with fio on XFS with a 4k block
size against pmem and NVMe, with buffered IO and Direct IO, comparing vanilla
against these patches applied, and detected no regressions.
We also wrote an eBPF tool called blkalgn [5] to observe whether IO sent to
the device is aligned and at least one filesystem block in length.
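As a minimal sketch of the property blkalgn checks (assumed here from its
description above, not taken from the tool's source), an IO is LBS-safe
when its byte offset is block-aligned and its length is at least one
filesystem block; io_is_lbs_safe() is a hypothetical helper:

#include <stdbool.h>
#include <stdint.h>

/* Hypothetical reduction of the check: offset aligned to the fs block
 * size and length at least one fs block. */
static bool io_is_lbs_safe(uint64_t off, uint64_t len, uint64_t fsblock)
{
	return (off % fsblock) == 0 && len >= fsblock;
}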
For those who want this in a git tree, we have it up on the kdevops
20240503-large-block-minorder branch [6].
[0] https://lore.kernel.org/lkml/20230915183848.1018717-1-kernel@pankajraghav.com/
[1] https://www.youtube.com/watch?v=ar72r5Xf7x4
[2] https://lkml.kernel.org/r/20240501153120.4094530-1-willy@infradead.org
[3] https://github.com/linux-kdevops/kdevops/blob/master/docs/xfs-bugs.md
489 non-critical issues and 55 critical issues. We've determined and reported
that the 55 critical issues all fall into 5 common XFS asserts or hung
tasks and 2 memory management asserts.
[4] https://github.com/linux-kdevops/fstests/tree/lbs-fixes
[5] https://github.com/iovisor/bcc/pull/4813
[6] https://github.com/linux-kdevops/linux/tree/large-block-minorder-next-20240528
Changes since v5:
- Max order is respected by the page cache (sketched below)
- No LBS support for V4 format in XFS
- Use a 64k zeroed buffer for iomap direct io zeroing
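As a rough sketch of the first item (again illustrative, not the actual
hunk), respecting the max order simply means clamping whatever folio
order a filesystem requests to what the page cache can actually
allocate; pagecache_clamp_order() is a hypothetical reduction of that
logic:

#include <linux/minmax.h>
#include <linux/pagemap.h>

/* Hypothetical helper: never exceed what the page cache supports. */
static inline unsigned int pagecache_clamp_order(unsigned int order)
{
	return min_t(unsigned int, order, MAX_PAGECACHE_ORDER);
}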
Dave Chinner (1):
xfs: use kvmalloc for xattr buffers
Hannes Reinecke (1):
readahead: rework loop in page_cache_ra_unbounded()
Luis Chamberlain (1):
mm: split a folio in minimum folio order chunks
Matthew Wilcox (Oracle) (1):
fs: Allow fine-grained control of folio sizes
Pankaj Raghav (7):
filemap: allocate mapping_min_order folios in the page cache
readahead: allocate folios with mapping_min_order in readahead
filemap: cap PTE range to be created to allowed zero fill in
folio_map_range()
iomap: fix iomap_dio_zero() for fs bs > system page size
xfs: expose block size in stat
xfs: make the calculation generic in xfs_sb_validate_fsb_count()
xfs: enable block size larger than page size support
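As a minimal sketch of how the pieces above fit together (the helper
name and signature below follow the patch titles but should be treated
as assumptions of this sketch): a filesystem whose block size exceeds
the page size tells the page cache the minimum folio order it needs,
and the filemap/readahead patches then only allocate folios of at
least that order for the mapping:

#include <linux/pagemap.h>

static void lbs_setup_mapping(struct address_space *mapping,
			      unsigned int blkbits)
{
	/* e.g. a 64k block on a 4k page system needs order-4 folios */
	if (blkbits > PAGE_SHIFT)
		mapping_set_folio_min_order(mapping, blkbits - PAGE_SHIFT);
}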
fs/internal.h | 8 +++
fs/iomap/buffered-io.c | 5 ++
fs/iomap/direct-io.c | 9 ++-
fs/xfs/libxfs/xfs_attr_leaf.c | 15 ++---
fs/xfs/libxfs/xfs_ialloc.c | 5 ++
fs/xfs/libxfs/xfs_shared.h | 3 +
fs/xfs/xfs_icache.c | 6 +-
fs/xfs/xfs_iops.c | 2 +-
fs/xfs/xfs_mount.c | 11 +++-
fs/xfs/xfs_super.c | 18 +++---
include/linux/huge_mm.h | 14 +++--
include/linux/pagemap.h | 106 +++++++++++++++++++++++++++++-----
mm/filemap.c | 36 ++++++++----
mm/huge_memory.c | 50 +++++++++++++++-
mm/readahead.c | 98 ++++++++++++++++++++++++-------
15 files changed, 310 insertions(+), 76 deletions(-)
base-commit: 6dc544b66971c7f9909ff038b62149105272d26a
--
2.34.1