Message-ID: <23b538a7-904d-4fc3-89ff-9a79904c0683@nvidia.com>
Date: Mon, 10 Nov 2025 21:34:37 +0000
From: Chaitanya Kulkarni <chaitanyak@...dia.com>
To: Caleb Sander Mateos <csander@...estorage.com>
CC: Jens Axboe <axboe@...nel.dk>, Damien Le Moal <dlemoal@...nel.org>,
Christoph Hellwig <hch@....de>, "linux-block@...r.kernel.org"
<linux-block@...r.kernel.org>, "linux-kernel@...r.kernel.org"
<linux-kernel@...r.kernel.org>, Ming Lei <ming.lei@...hat.com>, Keith Busch
<kbusch@...nel.org>
Subject: Re: [PATCH 1/2] loop: use blk_rq_nr_phys_segments() instead of
iterating bvecs
On 11/10/25 08:47, Caleb Sander Mateos wrote:
> On Sun, Nov 9, 2025 at 4:20 AM Ming Lei <ming.lei@...hat.com> wrote:
>> On Sat, Nov 08, 2025 at 04:01:00PM -0700, Caleb Sander Mateos wrote:
>>> The number of bvecs can be obtained directly from struct request's
>>> nr_phys_segments field via blk_rq_nr_phys_segments(), so use that
>>> instead of iterating over the bvecs an extra time.
>>>
>>> Signed-off-by: Caleb Sander Mateos <csander@...estorage.com>
>>> ---
>>> drivers/block/loop.c | 5 +----
>>> 1 file changed, 1 insertion(+), 4 deletions(-)
>>>
>>> diff --git a/drivers/block/loop.c b/drivers/block/loop.c
>>> index 13ce229d450c..8096478fad45 100644
>>> --- a/drivers/block/loop.c
>>> +++ b/drivers/block/loop.c
>>> @@ -346,16 +346,13 @@ static int lo_rw_aio(struct loop_device *lo, struct loop_cmd *cmd,
>>> struct request *rq = blk_mq_rq_from_pdu(cmd);
>>> struct bio *bio = rq->bio;
>>> struct file *file = lo->lo_backing_file;
>>> struct bio_vec tmp;
>>> unsigned int offset;
>>> - int nr_bvec = 0;
>>> + unsigned short nr_bvec = blk_rq_nr_phys_segments(rq);
>>> int ret;
>>>
>>> - rq_for_each_bvec(tmp, rq, rq_iter)
>>> - nr_bvec++;
>>> -
>> The two may not be same, since one bvec can be splitted into multiple segments.
> Hmm, io_buffer_register_bvec() already assumes
> blk_rq_nr_phys_segments() returns the number of bvecs iterated by
> rq_for_each_bvec(). I asked about this on the patch adding it, but
> Keith assures me they match:
> https://lore.kernel.org/io-uring/Z7TmrB4_aBnZdFbo@kbusch-mbp/.
>
> Best,
> Caleb
>
Perhaps I don't understand how they will be the same? Can you share more details?
Segment splitting:
nr_bvec=1, blk_rq_nr_phys_segments=2 (see below)
- ONE large bvec is split into MULTIPLE physical segments
- The patch above would allocate array[2], but the bvec iteration fills only array[0]?
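To spell out the concern, this is roughly what the lo_rw_aio() multi-bvec path
would do with the patch applied (a simplified paraphrase, not the exact driver
code), assuming the two counts really can diverge:

unsigned short nr_bvec = blk_rq_nr_phys_segments(rq);	/* 2 in the trace below */
struct req_iterator rq_iter;
struct bio_vec *bvec, tmp;
int i = 0;

bvec = kmalloc_array(nr_bvec, sizeof(*bvec), GFP_NOIO);	/* array[2] */
rq_for_each_bvec(tmp, rq, rq_iter)	/* iterates once: one 128K bvec */
	bvec[i++] = tmp;		/* only bvec[0] is written */
/* iov_iter_bvec(..., bvec, nr_bvec, blk_rq_bytes(rq)) is then handed
 * nr_bvec entries although only the first one was filled in. */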
[ 6155.673749] nullb_bio: 128K bio as ONE bvec: sector=0, size=131072, op=WRITE
[ 6155.673846] null_blk: #### null_handle_data_transfer:1375
[ 6155.673850] null_blk: nr_bvec=1 blk_rq_nr_phys_segments=2
[ 6155.674263] null_blk: #### null_handle_data_transfer:1375
[ 6155.674267] null_blk: nr_bvec=1 blk_rq_nr_phys_segments=1
diff --git a/drivers/block/null_blk/main.c b/drivers/block/null_blk/main.c
index 1fe3373431ca..74ab0ba53f62 100644
--- a/drivers/block/null_blk/main.c
+++ b/drivers/block/null_blk/main.c
@@ -1364,7 +1364,18 @@ static blk_status_t null_handle_data_transfer(struct nullb_cmd *cmd,
unsigned int max_bytes = nr_sectors << SECTOR_SHIFT;
unsigned int transferred_bytes = 0;
struct req_iterator iter;
+ struct req_iterator rq_iter;
struct bio_vec bvec;
+ int nr_bvec = 0;
+
+ rq_for_each_bvec(bvec, rq, rq_iter)
+ nr_bvec++;
+
+ if (req_op(rq) == REQ_OP_WRITE) {
+ pr_info("#### %s:%d\n", __func__, __LINE__);
+ pr_info("nr_bvec=%d blk_rq_nr_phys_segments=%u\n",
+ nr_bvec, blk_rq_nr_phys_segments(rq));
+ }
spin_lock_irq(&nullb->lock);
rq_for_each_segment(bvec, rq, iter) {
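For reference, the nr_bvec=1 / nr_phys_segments=2 case above is consistent with
one 128K contiguous bvec being counted against the default 64K max_segment_size;
if I'm reading it right, the in-kernel counting (which also honors the segment
boundary mask) is done by bvec_split_segs() in block/blk-merge.c. A minimal
userspace sketch of just that arithmetic, with the 64K limit assumed for
illustration:

#include <stdio.h>

/* Segments contributed by one contiguous bvec if only the
 * max_segment_size limit is applied (boundary mask ignored). */
static unsigned int segs_per_bvec(unsigned int bv_len, unsigned int max_seg_size)
{
	return (bv_len + max_seg_size - 1) / max_seg_size;
}

int main(void)
{
	/* one 128K bvec vs. an assumed 64K max_segment_size */
	printf("segments = %u\n", segs_per_bvec(128 * 1024, 64 * 1024)); /* prints 2 */
	return 0;
}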
-ck