Message-ID: <cone.1171759517.348623.14045.501@kolivas.org>
Date: Sun, 18 Feb 2007 11:45:17 +1100
From: Con Kolivas <kernel@...ivas.org>
To: Radoslaw Szkodzinski <astralstorm@...il.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Chuck Ebbert <cebbert@...hat.com>,
michael chang <thenewme91@...il.com>,
ck mailing list <ck@....kolivas.org>,
linux kernel mailing list <linux-kernel@...r.kernel.org>
Subject: Re: 2.6.20-ck1

Radoslaw Szkodzinski writes:
> On 2/18/07, Andrew Morton <akpm@...ux-foundation.org> wrote:
>> Generally, the penalties for getting this stuff wrong are very, very high:
>> orders of magnitude slowdowns in the right situations, which I suspect
>> will make any system-wide knob ultimately unsuccessful.
>>
>
> Yes, they were, in earlier versions. The patch is now extremely light and
> well tuned: kprefetchd should only run on a totally idle system.
>
>> The ideal way of getting this *right* is to change every application in the
>> world to get smart about using sync_page_range() and/or posix_fadvise(),
>> then to add a set of command-line options to each application in the world
>> so the user can control its pagecache handling.
>
> We don't live in a perfect world. :-)
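
[For concreteness, the application-side discipline Andrew describes above
might look roughly like the sketch below: a copy loop that uses
posix_fadvise(2) and sync_file_range(2) (the userspace-visible relative of
the kernel's sync_page_range()) to keep its pagecache footprint bounded.
The function name, the 4 MiB window, and the buffer size are illustrative
assumptions, not taken from any existing tool.]

/* Minimal sketch, assuming Linux >= 2.6.17 for sync_file_range(2).
 * Error handling is elided for brevity. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <unistd.h>

#define WINDOW (4 * 1024 * 1024)	/* flush and drop in 4 MiB chunks */

static void copy_bounded(int in_fd, int out_fd)
{
	char buf[65536];
	off_t done = 0, dropped = 0;
	ssize_t n;

	/* Tell the kernel the input is read once, front to back. */
	posix_fadvise(in_fd, 0, 0, POSIX_FADV_SEQUENTIAL);

	while ((n = read(in_fd, buf, sizeof buf)) > 0) {
		write(out_fd, buf, n);
		done += n;
		if (done - dropped >= WINDOW) {
			/* Start writeback of the current window... */
			sync_file_range(out_fd, dropped, done - dropped,
					SYNC_FILE_RANGE_WRITE);
			/* ...wait for everything behind it, then drop the
			 * now-clean pages so they never accumulate. */
			sync_file_range(out_fd, 0, dropped,
					SYNC_FILE_RANGE_WAIT_BEFORE |
					SYNC_FILE_RANGE_WRITE |
					SYNC_FILE_RANGE_WAIT_AFTER);
			posix_fadvise(out_fd, 0, dropped,
				      POSIX_FADV_DONTNEED);
			dropped = done;
		}
	}
	fdatasync(out_fd);
	posix_fadvise(out_fd, 0, 0, POSIX_FADV_DONTNEED);
}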
>
>> Obviously that isn't practical. But what _could_ be done is to put these
>> pagecache smarts into glibc's read() and write() code. So the user can do:
>>
>> MAX_PAGECACHE=4M MAX_DIRTY_PAGECACHE=2M rsync foo bar
>>
>> This will provide pagecache control for pretty much every application. It
>> has limitations (fork+exec behaviour??) but will be useful.
>
> Not too useful for interactive applications with unpredictable memory
> consumption behaviour, where swap-prefetch still helps.
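
[Andrew's environment-variable idea could be prototyped today, without
patching glibc, as an LD_PRELOAD shim. The sketch below is hypothetical,
not existing glibc behaviour: it wraps only write(), reads
MAX_DIRTY_PAGECACHE as plain bytes (no "2M"-style suffix parsing), and
tracks dirtying process-wide rather than per file. Since LD_PRELOAD and
the environment survive fork+exec, this also partly answers the
inheritance question Andrew raises.]

/* Hypothetical shim; build with: gcc -shared -fPIC shim.c -o shim.so -ldl
 * Run as: MAX_DIRTY_PAGECACHE=2097152 LD_PRELOAD=./shim.so rsync foo bar */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <fcntl.h>
#include <stdlib.h>
#include <unistd.h>

static ssize_t (*real_write)(int, const void *, size_t);
static size_t dirty_limit;	/* bytes; 0 means "no limit" */
static size_t dirty_bytes;

__attribute__((constructor))
static void shim_init(void)
{
	const char *s = getenv("MAX_DIRTY_PAGECACHE");

	real_write = (ssize_t (*)(int, const void *, size_t))
			dlsym(RTLD_NEXT, "write");
	dirty_limit = s ? strtoul(s, NULL, 0) : 0;
}

ssize_t write(int fd, const void *buf, size_t count)
{
	ssize_t n;

	if (!real_write)	/* in case we're called before init */
		shim_init();

	n = real_write(fd, buf, count);
	if (n > 0 && dirty_limit && (dirty_bytes += n) >= dirty_limit) {
		/* Flush this fd's dirty pages and drop them, so the
		 * process never holds more than the configured window. */
		fdatasync(fd);
		posix_fadvise(fd, 0, 0, POSIX_FADV_DONTNEED);
		dirty_bytes = 0;
	}
	return n;
}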
Hey Radoslaw, your points are valid, but in this email Andrew was referring
to the tail largefiles patch, not swap prefetch.
--
-ck