Message-Id: <1178656627.5203.84.camel@localhost>
Date: Tue, 08 May 2007 16:37:06 -0400
From: Lee Schermerhorn <Lee.Schermerhorn@...com>
To: Christoph Lameter <clameter@....com>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
linux-kernel@...r.kernel.org, linux-mm@...ck.org,
akpm@...ux-foundation.org, ak@...e.de, jbarnes@...tuousgeek.org
Subject: Re: [PATCH] change zonelist order v5 [1/3] implements zonelist
order selection
On Tue, 2007-05-08 at 11:05 -0700, Christoph Lameter wrote:
> On Tue, 8 May 2007, Lee Schermerhorn wrote:
>
> > > So far testing is IA64 only?
> > Yes, so far. I will test on an Opteron platform this pm.
> > Assume that no news is good news.
>
> A better assumption: no news -> no testing.
Before you asked, yes. I meant that after the last message, if you
didn't hear from me, everything worked fine. And it does, sort of...
> You probably need a
> configuration with a couple of nodes. Maybe something less symmetric than
> Kame's? I.e. have 4GB nodes and then DMA32 takes out a sizeable chunk of it?
>
I tested on a 2-socket, 4GB Opteron blade. All memory is either DMA32
or DMA. I added some ad hoc instrumentation to the build_zonelist_*
functions to see what's happening, and I have verified that the patches
appear to build the zonelists correctly:
default -> node order, because "low_kmem" [DMA+DMA32] > total_mem/2
[see the sketch after the listings below].
Zone lists [node order]:
  DMA:     DMA-0
  DMA32:   DMA32-0, DMA-0, DMA32-1
  Normal:  same as DMA32 [no Normal memory on this box]
  Movable: same as DMA32 and Normal
Explicit zone order also builds as expected:
  DMA:     DMA-0
  DMA32:   DMA32-1, DMA32-0, DMA-0
  Normal/Movable: same as DMA32
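For concreteness, here is what I take that default selection to be, as
a stand-alone sketch rather than the patch code itself [the
ZONELIST_ORDER_* names are just modeled on the patch's]:

#include <stdio.h>

/*
 * Sketch of the default zonelist order selection as I understand it,
 * not the actual patch code.  If the "low" zones (DMA + DMA32) hold
 * more than half of all managed pages, protecting them buys little,
 * so prefer node (locality) order; otherwise use zone order.
 */
enum zonelist_order { ZONELIST_ORDER_NODE, ZONELIST_ORDER_ZONE };

static enum zonelist_order pick_order(unsigned long low_kmem_pages,
                                      unsigned long total_pages)
{
        if (low_kmem_pages > total_pages / 2)
                return ZONELIST_ORDER_NODE;
        return ZONELIST_ORDER_ZONE;
}

int main(void)
{
        /* This blade: all 4GB is DMA or DMA32, so low == total. */
        unsigned long total = 1UL << 20;        /* 4GB in 4KB pages */
        unsigned long low = total;

        printf("default order: %s\n",
               pick_order(low, total) == ZONELIST_ORDER_NODE ?
               "node" : "zone");
        return 0;
}

On this box that prints "node", which matches the default order above.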
However, a curious thing happens: in either order, allocations seem to
overflow to the remote DMA32 zone before dipping into the local DMA
zone!!!? I'm using memtoy to create a large [3+GB] anon segment and
lock it down.
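[For the record, the test amounts to the following system-call
sequence. memtoy itself is an interactive tool; this is just a minimal
stand-alone equivalent of the "anon segment + lock" step:

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
        size_t len = 3UL << 30;         /* the 3GB anon segment */
        char *p;

        p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED) {
                perror("mmap");
                return 1;
        }
        if (mlock(p, len)) {            /* fault in and pin every page */
                perror("mlock");
                return 1;
        }
        memset(p, 0xA5, len);           /* touch the whole segment */
        printf("locked %zu bytes\n", len);
        return 0;
}

That lockdown is what drives the overflow behavior described above.]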
I need to check an unpatched kernel to see whether it behaves the same
way, and examine the code to see why... For one thing, the kernel seems
to do a bit better at reclaiming memory before overflowing. Eventually
it does dip into DMA, and the test finally gets OOM-killed.
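One thing I plan to do when I pick this back up is watch the per-zone
free counts drain while the segment is being locked. A throwaway
program like the following [my own stand-in, which assumes the
"Node N, zone X" / "pages free" layout of /proc/zoneinfo on current
kernels] makes the overflow order visible:

#include <stdio.h>

int main(void)
{
        FILE *f = fopen("/proc/zoneinfo", "r");
        char line[256], zone[64] = "?";
        unsigned long free_pages;
        int node = -1;

        if (!f) {
                perror("/proc/zoneinfo");
                return 1;
        }
        while (fgets(line, sizeof(line), f)) {
                /* zone headers look like: "Node 0, zone    DMA32" */
                if (sscanf(line, "Node %d, zone %63s", &node, zone) == 2)
                        continue;
                /* per-zone stats include: "  pages free     3952" */
                if (sscanf(line, " pages free %lu", &free_pages) == 1)
                        printf("node %d %-8s free: %lu pages\n",
                               node, zone, free_pages);
        }
        fclose(f);
        return 0;
}

Run it in a loop alongside the lockdown and you can see which zone
each node drains first.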
I'll be off-line most of the rest of the week, so I probably won't get
to investigate much further or test on a larger socket-count/memory
system until next week.
Lee