Message-ID: <20120910230643.GC19739@google.com>
Date: Mon, 10 Sep 2012 16:06:43 -0700
From: Kent Overstreet <koverstreet@...gle.com>
To: Tejun Heo <tj@...nel.org>
Cc: axboe@...nel.dk, device-mapper development <dm-devel@...hat.com>,
david@...morbit.com, linux-kernel@...r.kernel.org,
linux-bcache@...r.kernel.org,
Mikulas Patocka <mpatocka@...hat.com>, bharrosh@...asas.com,
Vivek Goyal <vgoyal@...hat.com>
Subject: Re: [dm-devel] [PATCH 2/2] block: Avoid deadlocks with bio
allocation by stacking drivers
On Mon, Sep 10, 2012 at 04:01:01PM -0700, Tejun Heo wrote:
> Hello,
>
> On Mon, Sep 10, 2012 at 3:50 PM, Alasdair G Kergon <agk@...hat.com> wrote:
> >> > Note that this doesn't do anything for allocation from other mempools.
> >
> > Note that dm has several cases of this, so this patch should not be used with
> > dm yet. Mikulas is studying those cases to see whether anything like this
> > might be feasible/sensible or not.
>
> IIUC, Kent posted a patch which converts all of them to use front-pad
> (there's no reason not to, really). This better come after that but
> it's not like this is gonna break something which isn't broken now.
Not all; I only did the easy one - you know how dm has all those crazy
abstraction layers? They've got multiple per-bio allocations because of
that: the core dm code does one, and then some other code takes that
struct dm_io * and allocates its own state pointing to it (which then
points to the original bio...)
So front_pad should still work, but you'd need to have, say, dm-crypt
pass the amount of front pad it needs to the core dm code when it
creates the bio_set; then dm-crypt can use container_of() to get at its
struct dm_io and embed its state, like everything else that uses the
bio_set front pad does.
*I'm probably misremembering all the names.