Message-ID: <Pine.LNX.4.64.0704021428340.2272@schroedinger.engr.sgi.com>
Date: Mon, 2 Apr 2007 14:31:37 -0700 (PDT)
From: Christoph Lameter <clameter@....com>
To: Dave Hansen <hansendc@...ibm.com>
cc: linux-kernel@...r.kernel.org, linux-arch@...r.kernel.org,
Martin Bligh <mbligh@...gle.com>, linux-mm@...ck.org,
Andi Kleen <ak@...e.de>,
KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
Subject: Re: [PATCH 1/2] Generic Virtual Memmap support for SPARSEMEM
On Mon, 2 Apr 2007, Dave Hansen wrote:
> > > Hmmmmmmm. Can we combine this with sparse_index_alloc()? Also, why not
> > > just use the slab for this?
> >
> > Use a slab for page sized allocations? No.
>
> Why not? We use it above for sparse_index_alloc() and if it is doing
> something wrong, I'd love to fix it. Can you elaborate?
The slab allocator's purpose is to deliver small sub-page-sized chunks.
The page allocator is there to allocate pages. Each is optimized for its
own purpose.
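
To illustrate the point (only a sketch, not code from the patch): a
page-sized, page-aligned vmemmap block would come straight from the page
allocator, e.g. via alloc_pages_node(). The helper name
vmemmap_alloc_block() below is hypothetical.

#include <linux/gfp.h>
#include <linux/mm.h>

/* Hypothetical helper: back one page of vmemmap on the given node. */
static void *vmemmap_alloc_block(int node)
{
	struct page *page;

	/* Whole pages come from the page allocator directly. */
	page = alloc_pages_node(node, GFP_KERNEL | __GFP_ZERO, 0);
	if (!page)
		return NULL;
	return page_address(page);
}

Going through kmalloc_node(PAGE_SIZE, ...) would also work, but the slab
layer exists to carve sub-page objects out of pages, so routing whole
pages through it only adds metadata and overhead.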
> > I just extended this in V2 to also work on IA64. It's pretty generic.
>
> Can you extend it to work on ppc? ;)
I do not know enough about how ppc handles large pages.
> You haven't posted V2, right?
No, tests are still running.
> > > Then, do whatever magic you want in alloc_vmemmap().
> >
> > That would break if alloc_vmemmap returns NULL because it cannot allocate
> > memory.
>
> OK, that makes sense. However, it would still be nice to hide that
> #ifdef somewhere that people are a bit less likely to run into it. It's
> just one #ifdef, so if you can kill it, great. Otherwise, they pile up
> over time and _do_ cause real readability problems.
Well, think about how to handle the case where the allocation of a page
table page or a vmemmap block fails. Once we have that sorted out, we
can clean up the higher layers.
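
To make that concrete, here is a minimal sketch (not the patch's code) of
the failure path I mean; vmemmap_alloc_block() and vmemmap_pte_populate()
are hypothetical helper names:

/* Hypothetical helpers, declared here only for this sketch. */
void *vmemmap_alloc_block(int node);
int vmemmap_pte_populate(unsigned long addr, void *block, int node);

static int vmemmap_populate_range(unsigned long start, unsigned long end,
				  int node)
{
	unsigned long addr;

	for (addr = start; addr < end; addr += PAGE_SIZE) {
		void *block = vmemmap_alloc_block(node);

		if (!block)
			return -ENOMEM;	/* out of memory: let the caller back out */
		if (vmemmap_pte_populate(addr, block, node))
			return -ENOMEM;	/* page table page allocation failed */
	}
	return 0;
}

Once the populate path can return -ENOMEM like this, the callers in the
sparse code can back out cleanly, and hiding that #ifdef in the higher
layers becomes much easier.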
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/