Message-ID: <4C610184.7070909@kernel.org>
Date: Tue, 10 Aug 2010 10:36:36 +0300
From: Pekka Enberg <penberg@...nel.org>
To: ngupta@...are.org, jaxboe@...ionio.com
CC: Pekka Enberg <penberg@...helsinki.fi>,
Minchan Kim <minchan.kim@...il.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Greg KH <greg@...ah.com>,
Linux Driver Project <devel@...verdev.osuosl.org>,
linux-mm <linux-mm@...ck.org>,
linux-kernel <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 04/10] Use percpu buffers
Hi Nitin,
On 8/10/10 8:32 AM, Nitin Gupta wrote:
>>> Other block drivers are allocating relatively small structs using
>>> mempool_alloc(). However, in case of zram, these buffers are quite
>>> large (compress_workmem is 64K!). So, allocating them on every write
>>> would probably be much slower than using a pre-allocated per-cpu buffer.
>>>
>> The mempool API is precisely for that - using pre-allocated buffers
>> instead of allocating every time. The preempt_disable() games make the
>> code complex and have the downside of higher scheduling latencies, so
>> why not give mempools a try?
>>
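(For reference, the per-CPU pattern in question is roughly the sketch
below; the names are illustrative, not the actual zram code.)

#include <linux/percpu.h>

/* 64K compression work buffer per CPU, allocated at init (not shown). */
static DEFINE_PER_CPU(unsigned char *, compress_workmem);

static void compress_one_page(void)
{
	unsigned char *workmem = get_cpu_var(compress_workmem);  /* preempt off */

	/* ... CPU-bound compression runs with preemption disabled ... */

	put_cpu_var(compress_workmem);                            /* preempt on */
}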
> mempool_alloc() first calls alloc_fn with __GFP_WAIT and __GFP_IO
> cleared and only *then* falls back to the pre-allocated buffers. So it
> will always be slower than directly using pre-allocated buffers, as is
> done currently.
>
> One trick we can use is to have alloc_fn always return failure when
> called without __GFP_WAIT and do the actual allocation otherwise. But
> that still seems like an unnecessary cost.
>
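That trick would look something like this (sketch only; the function
name and the size constant are made up):

#include <linux/mempool.h>
#include <linux/slab.h>

#define COMPRESS_WORKMEM_SIZE	(64 * 1024)	/* illustrative; zram's is 64K */

static void *workmem_mempool_alloc(gfp_t gfp_mask, void *pool_data)
{
	/*
	 * mempool_alloc() first tries alloc_fn with __GFP_WAIT and
	 * __GFP_IO cleared; failing that attempt on purpose makes it
	 * fall back to the pre-allocated elements immediately.
	 */
	if (!(gfp_mask & __GFP_WAIT))
		return NULL;

	return kmalloc(COMPRESS_WORKMEM_SIZE, gfp_mask);
}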
We can always extend the mempool API with a mempool_prealloc() function
if that turns out to be a problem. The per-CPU buffer with
preempt_disable() trickery isn't really the proper thing to do here: it
doesn't make much sense to disable preemption around compression that
is purely CPU-bound.
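Something along these lines would be enough (again only a sketch,
reusing the hypothetical workmem_mempool_alloc() above; the pool name
and the minimum element count are made up):

static mempool_t *workmem_pool;

static void workmem_mempool_free(void *element, void *pool_data)
{
	kfree(element);
}

static int __init create_workmem_pool(void)
{
	/* Pre-allocate two 64K work buffers; GFP_KERNEL here may sleep. */
	workmem_pool = mempool_create(2, workmem_mempool_alloc,
				      workmem_mempool_free, NULL);
	return workmem_pool ? 0 : -ENOMEM;
}

In the write path this becomes a plain alloc/free pair with no
preempt_disable() at all:

	void *workmem = mempool_alloc(workmem_pool, GFP_NOIO);

	/* ... compress using workmem, sleeping allowed ... */

	mempool_free(workmem, workmem_pool);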
Pekka