[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <f44c5fdf0702171641l43fb62d9h73df322634ae26d6@mail.gmail.com>
Date: Sun, 18 Feb 2007 01:41:07 +0100
From: "Radoslaw Szkodzinski" <astralstorm@...il.com>
To: "Andrew Morton" <akpm@...ux-foundation.org>
Cc: "Con Kolivas" <kernel@...ivas.org>,
"Chuck Ebbert" <cebbert@...hat.com>,
"michael chang" <thenewme91@...il.com>,
"ck mailing list" <ck@....kolivas.org>,
"linux kernel mailing list" <linux-kernel@...r.kernel.org>
Subject: Re: [ck] Re: 2.6.20-ck1
On 2/18/07, Andrew Morton <akpm@...ux-foundation.org> wrote:
> Generally, the penalties for getting this stuff wrong are very very high:
> orders of magnitude slowdowns in the right situations. Which I suspect
> will make any system-wide knob ultimately unsuccessful.
>
Yes, they were. Now, it's an extremely light and well-tuned patch.
kprefetchd should only run on a totally idle system now.
> The ideal way of getting this *right* is to change every application in the
> world to get smart about using sync_page_range() and/or posix_fadvise(),
> then to add a set of command-line options to each application in the world
> so the user can control its pagecache handling.
We don't live in a perfect world. :-)
> Obviously that isn't practical. But what _could_ be done is to put these
> pagecache smarts into glibc's read() and write() code. So the user can do:
>
> MAX_PAGECACHE=4M MAX_DIRTY_PAGECACHE=2M rsync foo bar
>
> This will provide pagecache control for pretty much every application. It
> has limitations (fork+exec behaviour??) but will be useful.
Not too useful for interactive applications with unpredictable memory
consumption behaviour, where swap-prefetch still helps.
> A kernel-based solution might use new rlimits, but would not be as flexible
> or successful as a libc-based one, I suspect.
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
Powered by blists - more mailing lists