Message-ID: <45A6F6C2.80905@yahoo.com.au>
Date: Fri, 12 Jan 2007 13:47:30 +1100
From: Nick Piggin <nickpiggin@...oo.com.au>
To: Aubrey <aubreylee@...il.com>
CC: Roy Huang <royhuang9@...il.com>, Andrew Morton <akpm@...l.org>,
Linus Torvalds <torvalds@...l.org>,
Hua Zhong <hzhong@...il.com>, Hugh Dickins <hugh@...itas.com>,
linux-kernel@...r.kernel.org, hch@...radead.org,
kenneth.w.chen@...el.com, mjt@....msk.ru,
Robin Getz <rgetz@...ckfin.uclinux.org>
Subject: Re: O_DIRECT question

Aubrey wrote:
> On 1/11/07, Roy Huang <royhuang9@...il.com> wrote:
>
>> On an embedded system, limiting the page cache can relieve memory
>> fragmentation. There is a patch against 2.6.19 which limits both the
>> page cache per opened file and the total page cache. When a limit is
>> reached, it releases the page cache that exceeds the limit.
>
> The patch seems to work for me, but I have some suggestions:
>
> 1) Can we limit the total page cache rather than the page cache per
> file? Consider a system with 128M of memory: 10% of that is 12.8M, so
> if only one application is running it could use 12.8M of vfs cache and
> performance would probably not be impacted. The current patch,
> however, limits the page cache per file, which means a lone
> application can only ever use CONFIG_PAGE_LIMIT pages of cache. That
> may be too small for the application.
> ------------------snip---------------
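>	/* per-file check from the patch: reclaim once this file's
>	 * cached pages reach its limit */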
>	if (mapping->nrpages >= mapping->pages_limit)
>		balance_cache(mapping);
> ------------------snip---------------
>
> 2) A percentage would be a better way to control the value. Can we
> add a /proc interface to make it tunable?

Even a global value isn't completely straightforward, and a per-file
value would be yet more work. You see, it is hard to do any sort of
directed reclaim against these pages.
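
To make the two suggestions concrete, here is a rough sketch of what a
global check plus a /proc tunable might look like. To be clear, this is
not the patch under discussion: pagecache_limit_percent,
pagecache_limit_pages(), balance_global_cache() and the
VM_PAGECACHE_LIMIT enum value are made-up names; only totalram_pages,
global_page_state(NR_FILE_PAGES), proc_dointvec and the ctl_table
machinery are real interfaces of the 2.6.19 era.

------------------snip---------------
#include <linux/mm.h>
#include <linux/fs.h>
#include <linux/vmstat.h>
#include <linux/sysctl.h>

/* Hypothetical global limit, expressed as a percentage of RAM. */
int pagecache_limit_percent = 10;

static unsigned long pagecache_limit_pages(void)
{
	return totalram_pages * pagecache_limit_percent / 100;
}

/*
 * Global variant of the quoted per-file check: compare the
 * system-wide file page count against the limit instead of a
 * single mapping's nrpages.  balance_global_cache() stands in
 * for whatever reclaim hook such a patch would provide.
 */
static void check_pagecache_limit(struct address_space *mapping)
{
	if (global_page_state(NR_FILE_PAGES) > pagecache_limit_pages())
		balance_global_cache(mapping);
}

/*
 * Suggestion 2): expose the percentage as
 * /proc/sys/vm/pagecache_limit_percent, registered via
 * register_sysctl_table() or hooked into vm_table.
 */
static ctl_table pagecache_limit_table[] = {
	{
		.ctl_name	= VM_PAGECACHE_LIMIT,	/* hypothetical new enum */
		.procname	= "pagecache_limit_percent",
		.data		= &pagecache_limit_percent,
		.maxlen		= sizeof(int),
		.mode		= 0644,
		.proc_handler	= &proc_dointvec,
	},
	{ .ctl_name = 0 }
};
------------------snip---------------

Even with something like the above, the check is the easy part;
deciding which pages to drop once the limit trips is exactly the
directed reclaim problem mentioned above.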
--
SUSE Labs, Novell Inc.