Message-Id: <20181115085306.9910-11-ming.lei@redhat.com>
Date: Thu, 15 Nov 2018 16:52:57 +0800
From: Ming Lei <ming.lei@...hat.com>
To: Jens Axboe <axboe@...nel.dk>
Cc: linux-block@...r.kernel.org, linux-kernel@...r.kernel.org,
linux-mm@...ck.org, Ming Lei <ming.lei@...hat.com>,
Dave Chinner <dchinner@...hat.com>,
Kent Overstreet <kent.overstreet@...il.com>,
Mike Snitzer <snitzer@...hat.com>, dm-devel@...hat.com,
Alexander Viro <viro@...iv.linux.org.uk>,
linux-fsdevel@...r.kernel.org, Shaohua Li <shli@...nel.org>,
linux-raid@...r.kernel.org, linux-erofs@...ts.ozlabs.org,
David Sterba <dsterba@...e.com>, linux-btrfs@...r.kernel.org,
"Darrick J . Wong" <darrick.wong@...cle.com>,
linux-xfs@...r.kernel.org, Gao Xiang <gaoxiang25@...wei.com>,
Christoph Hellwig <hch@....de>, Theodore Ts'o <tytso@....edu>,
linux-ext4@...r.kernel.org, Coly Li <colyli@...e.de>,
linux-bcache@...r.kernel.org, Boaz Harrosh <ooo@...ctrozaur.com>,
Bob Peterson <rpeterso@...hat.com>, cluster-devel@...hat.com
Subject: [PATCH V10 10/19] block: loop: pass multi-page bvec to iov_iter
iov_iter is implemented on top of the bvec iterator, so it is safe to
pass a multi-page bvec to it, and this is much more efficient than
splitting the I/O into one single-page bvec per page.
Cc: Dave Chinner <dchinner@...hat.com>
Cc: Kent Overstreet <kent.overstreet@...il.com>
Cc: Mike Snitzer <snitzer@...hat.com>
Cc: dm-devel@...hat.com
Cc: Alexander Viro <viro@...iv.linux.org.uk>
Cc: linux-fsdevel@...r.kernel.org
Cc: Shaohua Li <shli@...nel.org>
Cc: linux-raid@...r.kernel.org
Cc: linux-erofs@...ts.ozlabs.org
Cc: David Sterba <dsterba@...e.com>
Cc: linux-btrfs@...r.kernel.org
Cc: Darrick J. Wong <darrick.wong@...cle.com>
Cc: linux-xfs@...r.kernel.org
Cc: Gao Xiang <gaoxiang25@...wei.com>
Cc: Christoph Hellwig <hch@....de>
Cc: Theodore Ts'o <tytso@....edu>
Cc: linux-ext4@...r.kernel.org
Cc: Coly Li <colyli@...e.de>
Cc: linux-bcache@...r.kernel.org
Cc: Boaz Harrosh <ooo@...ctrozaur.com>
Cc: Bob Peterson <rpeterso@...hat.com>
Cc: cluster-devel@...hat.com
Signed-off-by: Ming Lei <ming.lei@...hat.com>
---
drivers/block/loop.c | 23 ++++++++++++-----------
1 file changed, 12 insertions(+), 11 deletions(-)
diff --git a/drivers/block/loop.c b/drivers/block/loop.c
index bf6bc35aaf88..a3fd418ec637 100644
--- a/drivers/block/loop.c
+++ b/drivers/block/loop.c
@@ -515,16 +515,16 @@ static int lo_rw_aio(struct loop_device *lo, struct loop_cmd *cmd,
struct bio *bio = rq->bio;
struct file *file = lo->lo_backing_file;
unsigned int offset;
- int segments = 0;
+ int nr_bvec = 0;
int ret;
if (rq->bio != rq->biotail) {
- struct req_iterator iter;
+ struct bvec_iter iter;
struct bio_vec tmp;
__rq_for_each_bio(bio, rq)
- segments += bio_segments(bio);
- bvec = kmalloc_array(segments, sizeof(struct bio_vec),
+ nr_bvec += bio_bvecs(bio);
+ bvec = kmalloc_array(nr_bvec, sizeof(struct bio_vec),
GFP_NOIO);
if (!bvec)
return -EIO;
@@ -533,13 +533,14 @@ static int lo_rw_aio(struct loop_device *lo, struct loop_cmd *cmd,
/*
* The bios of the request may be started from the middle of
* the 'bvec' because of bio splitting, so we can't directly
- * copy bio->bi_iov_vec to new bvec. The rq_for_each_segment
+ * copy bio->bi_iov_vec to new bvec. The bio_for_each_bvec
* API will take care of all details for us.
*/
- rq_for_each_segment(tmp, rq, iter) {
- *bvec = tmp;
- bvec++;
- }
+ __rq_for_each_bio(bio, rq)
+ bio_for_each_bvec(tmp, bio, iter) {
+ *bvec = tmp;
+ bvec++;
+ }
bvec = cmd->bvec;
offset = 0;
} else {
@@ -550,11 +551,11 @@ static int lo_rw_aio(struct loop_device *lo, struct loop_cmd *cmd,
*/
offset = bio->bi_iter.bi_bvec_done;
bvec = __bvec_iter_bvec(bio->bi_io_vec, bio->bi_iter);
- segments = bio_segments(bio);
+ nr_bvec = bio_bvecs(bio);
}
atomic_set(&cmd->ref, 2);
- iov_iter_bvec(&iter, rw, bvec, segments, blk_rq_bytes(rq));
+ iov_iter_bvec(&iter, rw, bvec, nr_bvec, blk_rq_bytes(rq));
iter.iov_offset = offset;
cmd->iocb.ki_pos = pos;
--
2.9.5