Message-ID: <48AE36A7.4060000@lougher.demon.co.uk>
Date: Fri, 22 Aug 2008 04:46:47 +0100
From: Phillip Lougher <phillip@...gher.demon.co.uk>
To: Jared Hulbert <jaredeh@...il.com>
CC: Linux-kernel@...r.kernel.org, linux-embedded@...r.kernel.org,
linux-mtd <linux-mtd@...ts.infradead.org>,
Jörn Engel <joern@...fs.org>,
tim.bird@...sony.com, cotte@...ibm.com, nickpiggin@...oo.com.au
Subject: Re: [PATCH 04/10] AXFS: axfs_inode.c

Jared Hulbert wrote:
>> I assume compressed blocks can be larger than PAGE_CACHE_SIZE? This suffers
>> from the rather obvious inefficiency that you decompress a big block >
>> PAGE_CACHE_SIZE, but only copy one PAGE_CACHE_SIZE page out of it. If
>> multiple files are being read simultaneously (a common occurrence), then
>> each is going to replace your one cached uncompressed block
>> (sbi->current_cnode_index), leading to decompressing the same blocks over
>> and over again on sequential file access.
>>
>> readpage file A, index 1 -> decompress block X
>> readpage file B, index 1 -> decompress block Y (replaces X)
>> readpage file A, index 2 -> repeated decompress of block X (replaces Y)
>> readpage file B, index 2 -> repeated decompress of block Y (replaces X)
>>
>> and so on.
>
> Yep. Been thinking about optimizing it. So far it hasn't been an
> issue for my customers. Most fs traffic is on the XIP pages. Once
> I get a good automated performance test up we'll probably look into
> something to improve this.

It's relatively easy to solve. Squashfs explicitly pushes the extra
pages into the pagecache, so subsequent reads find them there and
never call into Squashfs's readpage again.
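
In outline, the readpage path does something like the following. This
is a minimal sketch rather than the actual Squashfs code; the helper
name, its calling convention and the surrounding error handling are
made up for illustration:

/*
 * Push the pages covered by one decompressed block into the pagecache.
 * 'page' is the page readpage was asked for (already locked); 'data'
 * holds 'bytes' of decompressed data starting at page index 'start'.
 * Sketch only - helper name and arguments are hypothetical.
 */
#include <linux/kernel.h>
#include <linux/fs.h>
#include <linux/mm.h>
#include <linux/pagemap.h>
#include <linux/highmem.h>

static void push_decompressed_block(struct page *page, void *data,
				    int bytes, pgoff_t start)
{
	pgoff_t nr_pages = (bytes + PAGE_CACHE_SIZE - 1) >> PAGE_CACHE_SHIFT;
	pgoff_t i;

	for (i = 0; i < nr_pages; i++) {
		int avail = min_t(int, bytes - (i << PAGE_CACHE_SHIFT),
				  PAGE_CACHE_SIZE);
		struct page *push_page;
		void *pageaddr;

		/* re-use the page readpage gave us, grab the others */
		push_page = (start + i == page->index) ? page :
			grab_cache_page_nowait(page->mapping, start + i);
		if (push_page == NULL)
			continue;	/* someone else holds it, skip */

		if (PageUptodate(push_page))
			goto unlock;

		/* copy this page's worth of data, zero-fill the tail */
		pageaddr = kmap(push_page);
		memcpy(pageaddr, data + (i << PAGE_CACHE_SHIFT), avail);
		memset(pageaddr + avail, 0, PAGE_CACHE_SIZE - avail);
		kunmap(push_page);
		flush_dcache_page(push_page);
		SetPageUptodate(push_page);
unlock:
		unlock_page(push_page);
		if (push_page != page)
			page_cache_release(push_page);
	}
}

The effect is that the cost of decompressing one big block is paid
once and amortised over every PAGE_CACHE_SIZE page it covers, instead
of once per page as in the trace above.
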
Phillip