Message-ID: <20090315232614.GA27452@silver.sucs.org>
Date: Sun, 15 Mar 2009 23:26:14 +0000
From: Sitsofe Wheeler <sitsofe@...oo.com>
To: Alexey Fisher <bug-track@...her-privat.net>
Cc: Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: smart cache. ist is possible?
On Sun, Mar 15, 2009 at 11:06:34PM +0100, Alexey Fisher wrote:
>
> It is not what I mean. I know how to clear the cache, but that is
> exactly what I do not want. I will use the cache, and it works
> perfectly with small files.
I meant timings on the small files; otherwise, how do you know exactly
which pages were floating around in the cache?
> But there is a problem with big files. For example, I have 4GB of RAM;
> if I read a 4.6GB file the cache is useless. The question is: is there
> any way to work around this, other than adding more RAM?
I suspect what is happening is that you are cycling the cache. Because
you can't hold everything in RAM and you are reading the file
sequentially, you will have effectively cleared the start of the file
from the cache by the time you start again (the first part gets evicted
by the time the last part is read, and so on). If you use
dd bs=1000M count=1 I think you will find that the kernel CAN cache
pieces of files, but as pointed out elsewhere, without knowing the
future what do you decide to keep when your cache is full?
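
For example, something along these lines (the file names are only
placeholders) shows the effect; the second read of a piece that fits in
RAM comes back much faster because it is served from the page cache,
even though the whole 4.6GB file never fits:

    # whole small file: the second read is mostly served from cache
    time dd if=smallfile of=/dev/null bs=1M
    time dd if=smallfile of=/dev/null bs=1M

    # only the first 1000M of the big file: that piece stays cached
    time dd if=bigfile of=/dev/null bs=1000M count=1
    time dd if=bigfile of=/dev/null bs=1000M count=1
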
At a guess, you either need to provide a hint (e.g. bypassing the cache
for some of the file so it doesn't become full, or locking specific
pages into RAM) or create a bigger cache somehow (e.g. by buying more
RAM).
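
As a rough sketch of the "hint" idea (file names are placeholders, and
this assumes a GNU dd new enough to support iflag=direct): reading part
of the file with O_DIRECT keeps those pages out of the page cache
entirely, so the pieces you do want cached are not pushed out:

    # read the first 1GB normally so it lands in the page cache
    dd if=bigfile of=/dev/null bs=1M count=1024
    # read the rest with O_DIRECT so it does not evict anything
    dd if=bigfile of=/dev/null bs=1M skip=1024 iflag=direct

From a program you could do something similar with open(O_DIRECT),
posix_fadvise() or mlock(), but that is more work than dd.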
--
Sitsofe | http://sucs.org/~sits/