Message-ID: <4E738B81.2070005@vflare.org>
Date: Fri, 16 Sep 2011 13:46:41 -0400
From: Nitin Gupta <ngupta@...are.org>
To: Seth Jennings <sjenning@...ux.vnet.ibm.com>
CC: Greg KH <greg@...ah.com>, gregkh@...e.de,
devel@...verdev.osuosl.org, dan.magenheimer@...cle.com,
cascardo@...oscopio.com, linux-kernel@...r.kernel.org,
dave@...ux.vnet.ibm.com, linux-mm@...ck.org,
brking@...ux.vnet.ibm.com, rcj@...ux.vnet.ibm.com
Subject: Re: [PATCH v2 0/3] staging: zcache: xcfmalloc support
Hi Seth,
On 09/15/2011 12:31 PM, Seth Jennings wrote:
>
> So this is how I see things...
>
> Right now xvmalloc is broken for zcache's application because
> of its severe internal fragmentation for half of the valid
> allocation sizes (those > PAGE_SIZE/2).
>
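To put a rough number on that fragmentation: if the allocator cannot pack
a second object of size > PAGE_SIZE/2 into the remainder of a page, the
tail of each such page is simply wasted. Here is a minimal user-space
sketch of the worst case, assuming a 4K page; this is illustrative
arithmetic only, not xvmalloc's actual accounting:

#include <stdio.h>

#define PAGE_SIZE 4096

int main(void)
{
	int size;

	/* Objects just over half a page leave the largest unusable tail. */
	for (size = PAGE_SIZE / 2 + 64; size <= PAGE_SIZE; size += 512) {
		int wasted = PAGE_SIZE - size;

		printf("alloc %4d: %4d bytes of the page wasted (%2d%%)\n",
		       size, wasted, wasted * 100 / PAGE_SIZE);
	}
	return 0;
}
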
> My xcfmalloc patches are _a_ solution that is ready now. Sure,
> it doesn't do compaction yet, and it has some metadata overhead,
> so it's not "ideal" (if there is such a thing). But it does fix
> the brokenness of xvmalloc for zcache's application.
>
> So I see two ways going forward:
>
> 1) We review and integrate xcfmalloc now. Then, when you are
> done with your allocator, we can run them side by side and see
> which is better by the numbers. If yours is better, you'll get no
> argument from me and we can replace xcfmalloc with yours.
>
> 2) We can agree on a date (sooner rather than later) by which your
> allocator will be completed. At that time we can compare them and
> integrate the best one by the numbers.
>
I think replacing the allocator every few weeks isn't a good idea, so I
think it would be better to let me work for about two weeks on the slab
based approach. If nothing works out in that time, then maybe xcfmalloc
can be integrated after further testing.
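
To sketch what I mean by "slab based": allocations would be rounded up to
a small set of size classes, so per-object waste is bounded by the class
granularity rather than by up to half a page. A hypothetical user-space
illustration follows; the 256-byte class step and the class_size() helper
are made up for the example, since the real design is still to be worked
out:

#include <stdio.h>

#define CLASS_STEP 256	/* hypothetical size-class granularity */

/* Round a request up to its size class. */
static int class_size(int size)
{
	return (size + CLASS_STEP - 1) / CLASS_STEP * CLASS_STEP;
}

int main(void)
{
	int sizes[] = { 2100, 2600, 3300, 4000 };
	int i;

	for (i = 0; i < (int)(sizeof(sizes) / sizeof(sizes[0])); i++) {
		int c = class_size(sizes[i]);

		printf("request %4d -> class %4d, waste %3d bytes\n",
		       sizes[i], c, c - sizes[i]);
	}
	return 0;
}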
Thanks,
Nitin