Message-ID: <20140228233039.GA2739@moria.home.lan>
Date: Fri, 28 Feb 2014 15:30:39 -0800
From: Kent Overstreet <kmo@...erainc.com>
To: Matthew Wilcox <willy@...ux.intel.com>
Cc: axboe@...nel.dk, linux-kernel@...r.kernel.org,
linux-fsdevel@...r.kernel.org, Neil Brown <neilb@...e.de>,
Alasdair Kergon <agk@...hat.com>, dm-devel@...hat.com,
Lars Ellenberg <drbd-dev@...ts.linbit.com>,
drbd-user@...ts.linbit.com,
Asai Thambi S P <asamymuthupa@...ron.com>,
Sam Bradshaw <sbradshaw@...ron.com>,
linux-nvme@...ts.infradead.org, Jiri Kosina <jkosina@...e.cz>,
Geoff Levand <geoff@...radead.org>, Jim Paris <jim@...n.com>,
Joshua Morris <josh.h.morris@...ibm.com>,
Philip Kelleher <pjk1939@...ux.vnet.ibm.com>,
Minchan Kim <minchan@...nel.org>,
Nitin Gupta <ngupta@...are.org>,
Martin Schwidefsky <schwidefsky@...ibm.com>,
Heiko Carstens <heiko.carstens@...ibm.com>,
Peng Tao <bergwolf@...il.com>
Subject: Re: [PATCH 1/9] block: Make generic_make_request handle arbitrary
sized bios
On Thu, Feb 27, 2014 at 12:22:54PM -0500, Matthew Wilcox wrote:
> On Wed, Feb 26, 2014 at 03:39:49PM -0800, Kent Overstreet wrote:
> > We do this by adding calls to blk_queue_split() to the various
> > make_request functions that need it - a few can already handle arbitrary
> > size bios. Note that we add the call _after_ any call to blk_queue_bounce();
> > this means that blk_queue_split() and blk_recalc_rq_segments() don't need to
> > be concerned with bouncing affecting segment merging.
>
> > diff --git a/drivers/block/nvme-core.c b/drivers/block/nvme-core.c
> > index 51824d1f23..e4376b9613 100644
> > --- a/drivers/block/nvme-core.c
> > +++ b/drivers/block/nvme-core.c
> > @@ -737,6 +737,8 @@ static void nvme_make_request(struct request_queue *q, struct bio *bio)
> > struct nvme_queue *nvmeq = get_nvmeq(ns->dev);
> > int result = -EBUSY;
> >
> > + blk_queue_split(q, &bio, q->bio_split);
> > +
> > if (!nvmeq) {
> > put_nvmeq(NULL);
> > bio_endio(bio, -EIO);
>
> I'd suggest that we do:
>
> - struct nvme_queue *nvmeq = get_nvmeq(ns->dev);
> + struct nvme_queue *nvmeq;
> int result = -EBUSY;
>
> + blk_queue_split(q, &bio, q->bio_split);
> +
> + nvmeq = get_nvmeq(ns->dev);
> if (!nvmeq) {
>
> so that we're running the blk_queue_split() code outside the get_cpu()
> call.
>
> Now, the NVMe driver has its own rules about when BIOs have to be split.
> Right now, that's way down inside the nvme_map_bio() call when we're
> walking the bio to compose the scatterlist. Should we instead have an
> nvme_bio_split() routine that is called instead of blk_queue_split(),
> and we can simplify nvme_map_bio() since it'll know that it's working
> with bios that don't have to be split?
>
> In fact, I think it would have little NVMe-specific in it at that point,
> so we could name __blk_bios_map_sg() better, export it to drivers and
> call it from nvme_map_bio(), which I think would make everybody happier.
Actually, reading nvme_map_bio() (it's changed since I last looked at
it), it looks like nvme should already be able to handle arbitrary size
bios?
I do intend to rework blk_bio_map_sg() (or add a new one?) to
incrementally map as much of a bio as will fit in the provided
scatterlist. But it looks like nvme has some odd restrictions, where
it's using BIOVEC_PHYS_MERGEABLE()/BIOVEC_NOT_VIRT_MERGEABLE(), so I
dunno if it's worth bothering to try and have it use the generic code.
However, we don't need an explicit split here: if the sg list fills up
(i.e. at the places nvme_split_and_submit() is called), we can just mark
the bio as partially completed (set bio->bi_iter = iter, i.e. use the
iterator you passed to bio_for_each_segment()), then increment
bi_remaining (which just counts completions, i.e. bio_endio() calls
before the bio is really completed) and resubmit the original bio. No
need to allocate a split bio, or loop over the bio again in bio_split().
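The resubmit trick above can be modeled in miniature. This is a standalone sketch under stated assumptions, not kernel code: the field names mirror struct bio's bi_iter/bi_remaining, but struct fake_bio, fake_submit(), fake_endio() and CHUNK are all invented for illustration, and the driver "issuing" a segment is just a counter bump.

```c
/* Standalone model (NOT kernel code) of the partial-completion trick:
 * instead of allocating a split bio, record how far we got (bi_iter),
 * take an extra completion reference (bi_remaining), and resubmit the
 * same bio until every segment has been issued. */
#include <assert.h>

#define CHUNK 3 /* segments the pretend hardware accepts per pass */

struct fake_bio {
	int bi_iter;      /* index of next unissued segment */
	int bi_size;      /* total number of segments */
	int bi_remaining; /* completions left before the bio is done */
	int submissions;  /* for the example: count (re)submits */
};

static void fake_endio(struct fake_bio *bio)
{
	bio->bi_remaining--; /* one outstanding completion retired */
}

/* Issue up to CHUNK segments; if the bio isn't finished, save the
 * iterator, bump bi_remaining, and resubmit the same bio (in the
 * kernel this would go back through generic_make_request()). */
static void fake_submit(struct fake_bio *bio)
{
	int iter = bio->bi_iter;
	int issued = 0;

	bio->submissions++;
	while (iter < bio->bi_size && issued < CHUNK) {
		iter++; /* real code would map/issue this segment */
		issued++;
	}
	if (iter < bio->bi_size) {
		bio->bi_iter = iter; /* mark bio partially completed */
		bio->bi_remaining++; /* extra endio must happen first */
		fake_endio(bio);     /* this chunk's completion */
		fake_submit(bio);    /* resubmit the original bio */
	} else {
		fake_endio(bio);     /* final completion */
	}
}
```

With bi_remaining starting at 1, each partial pass adds a reference and then retires it for its own chunk, so only the last chunk's completion drops the count to zero and "really" completes the bio, which is the property the text is relying on.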