Message-Id: <1477728600-12938-41-git-send-email-tom.leiming@gmail.com>
Date: Sat, 29 Oct 2016 16:08:39 +0800
From: Ming Lei <tom.leiming@...il.com>
To: Jens Axboe <axboe@...com>, linux-kernel@...r.kernel.org
Cc: linux-block@...r.kernel.org, linux-fsdevel@...r.kernel.org,
Christoph Hellwig <hch@...radead.org>,
"Kirill A . Shutemov" <kirill.shutemov@...ux.intel.com>,
Ming Lei <tom.leiming@...il.com>, Jens Axboe <axboe@...nel.dk>
Subject: [PATCH 40/60] blk-merge: compute bio->bi_seg_front_size efficiently

It is enough to check and update bio->bi_seg_front_size just
after the first segment has been found, but the current code
performs the check for every bvec, which is inefficient.

This patch follows the approach used in __blk_recalc_rq_segments()
for computing bio->bi_seg_front_size; it is more efficient, and
the code becomes more readable too.

Signed-off-by: Ming Lei <tom.leiming@...il.com>
---
block/blk-merge.c | 9 +++++----
1 file changed, 5 insertions(+), 4 deletions(-)
diff --git a/block/blk-merge.c b/block/blk-merge.c
index 266c94d1d82f..465d9c65cb41 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -157,22 +157,21 @@ static struct bio *blk_bio_segment_split(struct request_queue *q,
bvprvp = &bvprv;
sectors += bv.bv_len >> 9;
- if (nsegs == 1 && seg_size > front_seg_size)
- front_seg_size = seg_size;
continue;
}
new_segment:
if (nsegs == queue_max_segments(q))
goto split;
+ if (nsegs == 1 && seg_size > front_seg_size)
+ front_seg_size = seg_size;
+
nsegs++;
bvprv = bv;
bvprvp = &bvprv;
seg_size = bv.bv_len;
sectors += bv.bv_len >> 9;
- if (nsegs == 1 && seg_size > front_seg_size)
- front_seg_size = seg_size;
}
do_split = false;
@@ -185,6 +184,8 @@ static struct bio *blk_bio_segment_split(struct request_queue *q,
bio = new;
}
+ if (nsegs == 1 && seg_size > front_seg_size)
+ front_seg_size = seg_size;
bio->bi_seg_front_size = front_seg_size;
if (seg_size > bio->bi_seg_back_size)
bio->bi_seg_back_size = seg_size;
--
2.7.4