Message-ID: <1304979845.4865.58.camel@mulgrave.site>
Date: Mon, 09 May 2011 17:24:04 -0500
From: James Bottomley <James.Bottomley@...e.de>
To: David Rientjes <rientjes@...gle.com>
Cc: Geert Uytterhoeven <geert@...ux-m68k.org>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Pekka Enberg <penberg@...helsinki.fi>,
Christoph Lameter <cl@...ux.com>, linux-m68k@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [git pull] m68k SLUB fix for 2.6.39
On Wed, 2011-05-04 at 12:07 -0700, David Rientjes wrote:
> On Wed, 4 May 2011, James Bottomley wrote:
>
> > Yes, but I also encountered it after I applied your patch, which is why I
> > still pushed the Kconfig patch. It's possible, since there were a huge
> > number of patches flying around, that the kernel base was contaminated,
> > so I'll strip down to just Linus' HEAD plus the parisc coherence patches,
> > revert the Kconfig one, and try again.
> >
>
> Great, and if that works out successfully this time around, I think we'll
> need to fix each individual arch Kconfig that we know doesn't work well
> (at least parisc, because of the scheduling issue) so that it at least
> enables CONFIG_NUMA implicitly for discontigmem unless CONFIG_BROKEN is
> set.
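
A minimal Kconfig sketch of that idea (hypothetical; the option name and
guard are assumptions, not the actual parisc Kconfig):

	# Hypothetical sketch: implicitly enable NUMA whenever
	# DISCONTIGMEM is in use, unless the user has explicitly
	# opted into broken configurations via CONFIG_BROKEN.
	config NUMA
		def_bool y if DISCONTIGMEM && !BROKEN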
OK, I confirm that the N_NORMAL_MEMORY patch on its own fixes SLUB for
us. We can revert the "mark SLUB BROKEN on DISCONTIGMEM && !NUMA" patch.
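
For reference, the N_NORMAL_MEMORY fix amounts to marking every online
node as carrying normal memory during early arch init, so that SLUB sets
up its per-node structures for all of them. A sketch along the lines of
the m68k patch (the function name here is hypothetical; the loop would
live in the arch's paging_init()):

	#include <linux/init.h>
	#include <linux/nodemask.h>

	/*
	 * Mark every online node as having normal memory; SLUB only
	 * initialises per-node bookkeeping for nodes in N_NORMAL_MEMORY,
	 * so memory on nodes left out of that mask oopses the allocator.
	 */
	static void __init mark_nodes_normal(void)
	{
		int i;

		for_each_online_node(i)
			node_set_state(i, N_NORMAL_MEMORY);
	}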
> The ideal solution is probably to rely on CONFIG_NEED_MULTIPLE_NODES
> rather than CONFIG_NUMA; that's why it was introduced in the first place,
> since both NUMA and discontigmem were duplicating the same data
> structures. Something keying off CONFIG_NUMA instead is apparently broken
> somewhere in the kernel, and that's what turned your SMP box into a UP
> one.
Sure ... either that or accelerate a conversion to something like
SPARSEMEM.
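
For context, CONFIG_NEED_MULTIPLE_NODES is already defined in mm/Kconfig
to cover both cases, roughly:

	config NEED_MULTIPLE_NODES
		def_bool y
		depends on DISCONTIGMEM || NUMA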
James