Message-ID: <20110720142018.GL5349@suse.de>
Date: Wed, 20 Jul 2011 15:20:18 +0100
From: Mel Gorman <mgorman@...e.de>
To: Christoph Lameter <cl@...ux.com>
Cc: Pekka Enberg <penberg@...nel.org>,
Konstantin Khlebnikov <khlebnikov@...allels.com>,
Andrew Morton <akpm@...ux-foundation.org>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Matt Mackall <mpm@...enic.com>
Subject: Re: [PATCH] mm-slab: allocate kmem_cache with __GFP_REPEAT
On Wed, Jul 20, 2011 at 08:54:10AM -0500, Christoph Lameter wrote:
> On Wed, 20 Jul 2011, Pekka Enberg wrote:
>
> > On Wed, 20 Jul 2011, Konstantin Khlebnikov wrote:
> > > > The changelog isn't that convincing, really. This is kmem_cache_create()
> > > > so I'm surprised we'd ever get NULL here in practice. Does this fix some
> > > > problem you're seeing? If this is really an issue, I'd blame the page
> > > > allocator as GFP_KERNEL should just work.
> > >
> > > nf_conntrack creates a separate slab cache for each net namespace.
> > > This patch of course does not eliminate the chance of failure, but it
> > > makes it more acceptable.
> >
> > I'm still surprised you are seeing failures. mm/slab.c hasn't changed
> > significantly in a long time. Why hasn't anyone reported this before? I'd
> > still be inclined to shift the blame to the page allocator... Mel, Christoph?
>
> There was a lot of recent fiddling with the reclaim logic. Maybe some of
> those changes caused the problem?
>
It's more likely that creating new slab caches under memory pressure
severe enough to fail an order-4 allocation is a situation that is
rarely tested.
What kernel version did this failure occur on? What was the system doing
at the time of failure? Can the page allocation failure message be
posted?
--
Mel Gorman
SUSE Labs