Message-Id: <20251217120013.2616531-1-linan666@huaweicloud.com>
Date: Wed, 17 Dec 2025 19:59:58 +0800
From: linan666@...weicloud.com
To: song@...nel.org,
yukuai@...as.com
Cc: linux-raid@...r.kernel.org,
linux-kernel@...r.kernel.org,
xni@...hat.com,
linan666@...weicloud.com,
yangerkun@...wei.com,
yi.zhang@...wei.com
Subject: [PATCH 00/15] folio support for sync I/O in RAID
From: Li Nan <linan122@...wei.com>
This patchset adds folio support to sync operations in raid1/10.
Previously, we used 16 * 4K pages for 64K sync I/O. With this change,
we'll use a single 64K folio instead.
This is the first step towards full folio support in RAID. Going forward,
I will replace the remaining page-based usage with folios.
The patchset was tested with mdadm. Additional fault-injection stress tests
were run on top of file systems.
Li Nan (15):
md/raid1,raid10: clean up of RESYNC_SECTORS
md: introduce sync_folio_io for folio support in RAID
md: use folio for bb_folio
md/raid1: use folio for tmppage
md/raid10: use folio for tmppage
md/raid1,raid10: use folio for sync path IO
md: Clean up folio sync support related code
md/raid1: clean up useless sync_blocks handling in raid1_sync_request
md/raid1: fix IO error at logical block size granularity
md/raid10: fix IO error at logical block size granularity
md/raid1,raid10: clean up resync_fetch_folio
md: clean up resync_free_folio
md/raid1: clean up sync IO size calculation in raid1_sync_request
md/raid10: clean up sync IO size calculation in raid10_sync_request
md/raid1,raid10: fall back to smaller order if sync folio alloc fails
drivers/md/md.h | 5 +-
drivers/md/raid1.h | 2 +-
drivers/md/raid10.h | 2 +-
drivers/md/md.c | 54 ++++++--
drivers/md/raid1-10.c | 81 ++++-------
drivers/md/raid1.c | 219 +++++++++++++-----------------
drivers/md/raid10.c | 303 ++++++++++++++++++++----------------------
7 files changed, 310 insertions(+), 356 deletions(-)
--
2.39.2