Message-Id: <200807141448.22233.nickpiggin@yahoo.com.au>
Date: Mon, 14 Jul 2008 14:48:21 +1000
From: Nick Piggin <nickpiggin@...oo.com.au>
To: Jon Tollefson <kniht@...ux.vnet.ibm.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Alexey Dobriyan <adobriyan@...il.com>, penberg@...helsinki.fi,
mpm@...enic.com, linux-mm@...ck.org, linux-kernel@...r.kernel.org,
cl@...ux-foundation.org
Subject: Re: SL*B: drop kmem cache argument from constructor
On Saturday 12 July 2008 07:40, Jon Tollefson wrote:
> Andrew Morton wrote:
> > btw, Nick, what's with that dopey
> >
> > huge_pgtable_cache(psize) = kmem_cache_create(...
> >
> > trick? The result of a function call is not an lvalue, and writing a
> > macro which pretends to be a function and then using it in some manner
> > in which a function cannot be used is seven ways silly :(
I agree it isn't nice.
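
For reference, the pattern being objected to looks roughly like this. The macro
body here is reconstructed from the pgtable_cache indexing in Jon's snippet
further down, so treat it as an illustrative sketch rather than a quote of the
actual tree:

/*
 * A function-like macro that expands to an array element.  An array
 * element is an lvalue, so the assignment below compiles -- but it
 * reads as if the result of a function call were being assigned to,
 * which is the complaint.
 */
#define huge_pgtable_cache(psize)	(pgtable_cache[HUGEPTE_CACHE_NUM + (psize) - 1])

/* ...later, at init time: */
huge_pgtable_cache(psize) = kmem_cache_create(...
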
> That silliness came from me.
> It came from my simplistic translation of the existing code to handle
> multiple huge page sizes. I would agree it would be easier to read and
> more straightforward to just have the indexed array directly on the
> left side instead of a macro. I can send out a patch that makes that
> change if desired.
> Something such as
>
> +#define HUGE_PGTABLE_INDEX(psize) (HUGEPTE_CACHE_NUM + psize - 1)
>
> -huge_pgtable_cache(psize) = kmem_cache_create(...
> +pgtable_cache[HUGE_PGTABLE_INDEX(psize)] = kmem_cache_create(...
>
>
> or if there is a more accepted way of handling this situation I can
> amend it differently.
If it is a one-off initialization (which it is), that's probably fine
like that. Otherwise, the convention is to have a set_huge_pgtable_cache()
function as well. But whatever you prefer. Yes, if you can send a patch,
that would be good, thanks.
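
Something along these lines is what that convention would look like. This is
only a sketch: the helper body is assumed from the indexing in Jon's snippet
above, and only pgtable_cache and HUGEPTE_CACHE_NUM are names taken from the
thread:

/* Setter helper instead of an lvalue-macro; body assumed, not quoted. */
static inline void set_huge_pgtable_cache(int psize, struct kmem_cache *cachep)
{
	pgtable_cache[HUGEPTE_CACHE_NUM + psize - 1] = cachep;
}

/* ...then at init time: */
set_huge_pgtable_cache(psize, kmem_cache_create(...
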