Message-Id: <3f79f293-2a77-43ad-b28d-a21ab59e112c@fnnas.com>
Date: Thu, 5 Feb 2026 00:38:47 +0800
From: "Yu Kuai" <yukuai@...as.com>
To: <linan666@...weicloud.com>, <song@...nel.org>
Cc: <xni@...hat.com>, <linux-raid@...r.kernel.org>,
<linux-kernel@...r.kernel.org>, <yangerkun@...wei.com>,
<yi.zhang@...wei.com>, <yukuai@...as.com>
Subject: Re: [PATCH v2 02/14] md: introduce sync_folio_io for folio support in RAID
Hi,
On 2026/1/28 15:56, linan666@...weicloud.com wrote:
> From: Li Nan <linan122@...wei.com>
>
> Prepare for folio support in RAID by introducing sync_folio_io(),
> matching sync_page_io()'s functionality. Differences are:
>
> - Replace input parameter 'page' with 'folio'
> - Replace __bio_add_page() calls with bio_add_folio_nofail()
> - Add new parameter 'off' to prepare for adding a folio to bio in segments,
> e.g. in fix_recovery_read_error()
>
> sync_page_io() will be removed once full folio support is complete.
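
A caller-side usage sketch, for reference (variable names here are only
illustrative, not taken from this series): reading 'size' bytes at 'sector'
into 'folio' starting at byte offset 'off' would look like

	if (!sync_folio_io(rdev, sector, size, off, folio,
			   REQ_OP_READ, false))
		pr_err("md: read error on %pg\n", rdev->bdev);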
>
> Signed-off-by: Li Nan <linan122@...wei.com>
> ---
> drivers/md/md.h | 2 ++
> drivers/md/md.c | 27 +++++++++++++++++++++++++++
> 2 files changed, 29 insertions(+)
>
> diff --git a/drivers/md/md.h b/drivers/md/md.h
> index a083f37374d0..410f8a6b75e7 100644
> --- a/drivers/md/md.h
> +++ b/drivers/md/md.h
> @@ -920,6 +920,8 @@ void md_write_metadata(struct mddev *mddev, struct md_rdev *rdev,
> extern int md_super_wait(struct mddev *mddev);
> extern int sync_page_io(struct md_rdev *rdev, sector_t sector, int size,
> struct page *page, blk_opf_t opf, bool metadata_op);
> +extern int sync_folio_io(struct md_rdev *rdev, sector_t sector, int size,
> + int off, struct folio *folio, blk_opf_t opf, bool metadata_op);
> extern void md_do_sync(struct md_thread *thread);
> extern void md_new_event(void);
> extern void md_allow_write(struct mddev *mddev);
> diff --git a/drivers/md/md.c b/drivers/md/md.c
> index 5df2220b1bd1..b8c8a16cf037 100644
> --- a/drivers/md/md.c
> +++ b/drivers/md/md.c
> @@ -1192,6 +1192,33 @@ int sync_page_io(struct md_rdev *rdev, sector_t sector, int size,
> }
> EXPORT_SYMBOL_GPL(sync_page_io);
>
> +int sync_folio_io(struct md_rdev *rdev, sector_t sector, int size, int off,
> + struct folio *folio, blk_opf_t opf, bool metadata_op)
> +{
> + struct bio bio;
> + struct bio_vec bvec;
> +
> + if (metadata_op && rdev->meta_bdev)
> + bio_init(&bio, rdev->meta_bdev, &bvec, 1, opf);
> + else
> + bio_init(&bio, rdev->bdev, &bvec, 1, opf);
> +
> + if (metadata_op)
> + bio.bi_iter.bi_sector = sector + rdev->sb_start;
> + else if (rdev->mddev->reshape_position != MaxSector &&
> + (rdev->mddev->reshape_backwards ==
> + (sector >= rdev->mddev->reshape_position)))
> + bio.bi_iter.bi_sector = sector + rdev->new_data_offset;
> + else
> + bio.bi_iter.bi_sector = sector + rdev->data_offset;
The code above is the same as in sync_page_io(). I think you can just remove
sync_page_io() in this patch and convert its callers to sync_folio_io() by
passing in page_folio(page).
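
For example, a typical call site could be converted with something like the
following (illustrative only; the offset of 0 assumes an order-0 page):

	-	if (!sync_page_io(rdev, sector, size, page, opf, metadata_op))
	+	if (!sync_folio_io(rdev, sector, size, 0, page_folio(page),
	+			   opf, metadata_op))
			/* error handling unchanged */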
> + bio_add_folio_nofail(&bio, folio, size, off);
> +
> + submit_bio_wait(&bio);
> +
> + return !bio.bi_status;
Please also change the return value to bool, and check
bio.bi_status == BLK_STS_OK instead.
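
Something like this (untested):

	-	return !bio.bi_status;
	+	return bio.bi_status == BLK_STS_OK;

with the return type of sync_folio_io() changed from int to bool in both
md.c and md.h.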
> +}
> +EXPORT_SYMBOL_GPL(sync_folio_io);
> +
> static int read_disk_sb(struct md_rdev *rdev, int size)
> {
> if (rdev->sb_loaded)
--
Thanks,
Kuai