Message-ID: <Pine.LNX.4.64.0804081333180.30874@schroedinger.engr.sgi.com>
Date: Tue, 8 Apr 2008 13:43:16 -0700 (PDT)
From: Christoph Lameter <clameter@....com>
To: Hugh Dickins <hugh@...itas.com>
cc: James Bottomley <James.Bottomley@...senPartnership.com>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Andrew Morton <akpm@...ux-foundation.org>,
FUJITA Tomonori <fujita.tomonori@....ntt.co.jp>,
Jens Axboe <jens.axboe@...cle.com>,
Pekka Enberg <penberg@...helsinki.fi>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>,
"Rafael J. Wysocki" <rjw@...k.pl>, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] scsi: fix sense_slab/bio swapping livelock
On Mon, 7 Apr 2008, Hugh Dickins wrote:
> > Looking at mempool_alloc: Mempools may be used to do atomic allocations
> > until they fail, thereby exhausting reserves and the available objects
> > in the partial lists of slab caches?
>
> Mempools may be used for atomic allocations, but I think that's not
> the case here. swap_writepage's get_swap_bio says GFP_NOIO, which
> allows (indeed is) __GFP_WAIT, and does not give access to __GFP_HIGH
> reserves.
It looks like one of the issues here is that swap_writepage() does not
perform enough reclaim? If it freed more pages, then __scsi_get_command()
would still have pages available to allocate from and would not drain
the reserves.
> Maybe PF_MEMALLOC and __GFP_NOMEMALLOC complicate the situation:
> I've given little thought to mempool_alloc's fiddling with the
> gfp_mask (beyond repeatedly misreading it).
mempool_alloc()'s use of the gfp_mask here suggests that it can potentially
drain all reserves and exhaust all available "slots" (partial slabs). Thus
it may regularly force any other user of the slab cache to hit the slow
path and potentially trigger reclaim. That could be a bit unfair.
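To be concrete about the gfp_mask fiddling: mempool_alloc() does roughly
the following with the caller's mask (paraphrased from mm/mempool.c from
memory, so take the exact flags with a grain of salt):

	void *mempool_alloc(mempool_t *pool, gfp_t gfp_mask)
	{
		void *element;
		gfp_t gfp_temp;

		gfp_mask |= __GFP_NOMEMALLOC;	/* stay out of emergency reserves */
		gfp_mask |= __GFP_NORETRY;	/* do not loop in the page allocator */
		gfp_mask |= __GFP_NOWARN;	/* failures are expected here */

		/* first attempt without __GFP_WAIT/__GFP_IO: no reclaim done */
		gfp_temp = gfp_mask & ~(__GFP_WAIT | __GFP_IO);

		element = pool->alloc(gfp_temp, pool->pool_data);
		if (likely(element != NULL))
			return element;

		/* ... otherwise take an element from the pool's reserve, or
		 * sleep and retry with the full gfp_mask ... */
	}

So the first attempt is effectively an atomic allocation against the slab
every time, which would fit the pattern of eating through the partial
slabs even though the caller passed GFP_NOIO.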