Message-ID: <20090504161503.GC8728@soda.linbit>
Date: Mon, 4 May 2009 18:15:03 +0200
From: Lars Ellenberg <lars.ellenberg@...bit.com>
To: Rik van Riel <riel@...hat.com>
Cc: Kyle Moffett <kyle@...fetthome.net>,
Philipp Reisner <philipp.reisner@...bit.com>,
linux-kernel@...r.kernel.org, Jens Axboe <jens.axboe@...cle.com>,
Greg KH <gregkh@...e.de>, Neil Brown <neilb@...e.de>,
James Bottomley <James.Bottomley@...senpartnership.com>,
Sam Ravnborg <sam@...nborg.org>, Dave Jones <davej@...hat.com>,
Nikanth Karthikesan <knikanth@...e.de>,
Lars Marowsky-Bree <lmb@...e.de>,
"Nicholas A. Bellinger" <nab@...ux-iscsi.org>,
Bart Van Assche <bart.vanassche@...il.com>
Subject: Re: [PATCH 02/16] DRBD: lru_cache
On Mon, May 04, 2009 at 12:12:07PM -0400, Rik van Riel wrote:
> Kyle Moffett wrote:
>> On Sun, May 3, 2009 at 8:48 PM, Kyle Moffett <kyle@...fetthome.net> wrote:
>>> There are a couple trivial tunables you can apply to the model I
>>> provided to dramatically change the effect of memory pressure on the
>>> LRU:
>>>
>>> [...]
>>>
>>
>> Ooh, I forgot to mention another biggie: there's a way to allocate a
>> reserve pool of memory (I don't remember the exact API, sorry) which
>> can be attached to a specific kmem_cache for use by processes
>> attempting writeout. That would let you allocate additional elements
>> and make forward progress even when all of your existing elements
>> are already in use.
>
> Lars,
>
> is using a mempool for allocation, in combination with a
> shrinker callback for freeing older entries, an option for
> DRBD?
>
> It looks like that could get rid of a fair amount of custom infrastructure.
I'm going to look into it.
Lars
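
For reference, Rik's suggestion (mempool for allocation plus a shrinker
callback for reclaim) might look roughly like the sketch below. This is a
hypothetical illustration, not DRBD code: the names `drbd_al_cache`,
`struct al_extent`, `al_evict_old()` and `al_count()` are invented here,
and the shrinker uses the single-callback signature current in 2.6.29-era
kernels, when this mail was written.

```c
/* Hedged sketch of mempool + shrinker for an LRU-style cache.
 * All drbd_al_* / al_* names below are hypothetical. */
#include <linux/mempool.h>
#include <linux/mm.h>
#include <linux/slab.h>

static struct kmem_cache *drbd_al_cache; /* hypothetical slab of AL extents */
static mempool_t *drbd_al_pool;          /* reserve so writeout can progress */

/* Invoked by the VM under memory pressure; must return the number of
 * cache objects remaining (2.6.29-era shrinker convention). */
static int drbd_al_shrink(int nr_to_scan, gfp_t gfp_mask)
{
	if (nr_to_scan)
		al_evict_old(nr_to_scan); /* hypothetical: drop cold entries */
	return al_count();                /* hypothetical: entries left */
}

static struct shrinker drbd_al_shrinker = {
	.shrink = drbd_al_shrink,
	.seeks  = DEFAULT_SEEKS,
};

static int __init drbd_al_init(void)
{
	drbd_al_cache = kmem_cache_create("drbd_al",
					  sizeof(struct al_extent),
					  0, 0, NULL);
	if (!drbd_al_cache)
		return -ENOMEM;

	/* Keep a small reserve so allocation in the writeout path
	 * cannot deadlock waiting for memory that only writeout frees. */
	drbd_al_pool = mempool_create_slab_pool(16, drbd_al_cache);
	if (!drbd_al_pool) {
		kmem_cache_destroy(drbd_al_cache);
		return -ENOMEM;
	}

	register_shrinker(&drbd_al_shrinker);
	return 0;
}
```

Allocation sites would then use `mempool_alloc(drbd_al_pool, GFP_NOIO)`
instead of a private preallocated element array, which is the custom
infrastructure Rik suggests this could replace.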
--