Message-ID: <20090414184038.GJ5178@kernel.dk>
Date: Tue, 14 Apr 2009 20:40:38 +0200
From: Jens Axboe <jens.axboe@...cle.com>
To: Theodore Tso <tytso@....edu>
Cc: Nikanth Karthikesan <knikanth@...e.de>, Neil Brown <neilb@...e.de>,
linux-kernel@...r.kernel.org, Chris Mason <chris.mason@...cle.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Dave Kleikamp <shaggy@...tin.ibm.com>, xfs-masters@....sgi.com
Subject: Re: [PATCH 0/6] Handle bio_alloc failure
On Tue, Apr 14 2009, Theodore Tso wrote:
> On Tue, Apr 14, 2009 at 08:20:49PM +0200, Jens Axboe wrote:
> >
> > It's a bio_alloc() guarantee, it uses a mempool backing. And if you use
> > a mempool backing, any allocation that can wait will always be
> > satisfied.
> >
>
> Am I missing something? I don't see anything in
> include/linux/mempool.h or mm/mempool.c, or in block/blk-core.c or
> include/linux/bio.h which documents that GFP_WAIT implies that
> bio_alloc() will always succeed.
Read mempool.c:mempool_alloc(). If __GFP_WAIT is set, it'll never
return without having done the allocation. It's a bit weird that it
isn't documented in bio_alloc() itself, but several other places in
bio.c reference the fact that it cannot fail.
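
The short version of why it cannot fail: the only NULL return in
mempool_alloc() sits behind a !__GFP_WAIT check. Stripped down, the
logic looks something like this (a simplified sketch of the
mm/mempool.c loop, not the verbatim source):

	void *mempool_alloc(mempool_t *pool, gfp_t gfp_mask)
	{
		void *element;
		unsigned long flags;
		DEFINE_WAIT(wait);

	repeat_alloc:
		/* First try the underlying allocator */
		element = pool->alloc(gfp_mask, pool->pool_data);
		if (element)
			return element;

		/* Failing that, pop one of the preallocated
		 * elements off the reserve */
		spin_lock_irqsave(&pool->lock, flags);
		if (pool->curr_nr) {
			element = remove_element(pool);
			spin_unlock_irqrestore(&pool->lock, flags);
			return element;
		}
		spin_unlock_irqrestore(&pool->lock, flags);

		/* Only a caller that cannot sleep ever sees failure */
		if (!(gfp_mask & __GFP_WAIT))
			return NULL;

		/* Otherwise sleep until mempool_free() puts an
		 * element back into the pool, then try again */
		prepare_to_wait(&pool->wait, &wait, TASK_UNINTERRUPTIBLE);
		if (!pool->curr_nr)
			io_schedule();
		finish_wait(&pool->wait, &wait);
		goto repeat_alloc;
	}
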
> My concern is that at some point in the future, someone either in the
> block device layer or in mm/mempool.c will consider this an
> implementation detail, and all of a sudden calls to bio_alloc() with
> GFP_WAIT will start failing, and the resulting hilarity won't be
> easily predicted by the developer making this change.
It's the entire premise of a mempool, so trust me, it'll never go away.
It is the reason they were added in the first place: for swap, e.g.,
you need the mempool guarantee or you risk deadlocking (to free memory
you have to write pages out, and writing them out itself needs a bio
allocation).
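
That guarantee is what makes the usual pattern safe: reserve a minimum
number of elements up front, then allocate from the pool in the I/O
path with no error leg. Hypothetically (pool, req_cachep and
MIN_POOL_REQS are made-up names, purely for illustration):

	/* At init time, while memory is plentiful, reserve enough
	 * objects from req_cachep to guarantee forward progress */
	pool = mempool_create_slab_pool(MIN_POOL_REQS, req_cachep);
	if (!pool)
		return -ENOMEM;

	/* In the I/O path, possibly under heavy memory pressure.
	 * GFP_NOIO includes __GFP_WAIT, so this cannot return NULL;
	 * at worst it sleeps until a prior request completes */
	req = mempool_alloc(pool, GFP_NOIO);

	/* ... submit the I/O; the completion path must eventually
	 * give the element back so waiters can make progress ... */
	mempool_free(req, pool);
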
--
Jens Axboe