Message-ID: <6934efce0808212027q412c4cbbp6ea8673a7d3bc1b9@mail.gmail.com>
Date: Thu, 21 Aug 2008 20:27:50 -0700
From: "Jared Hulbert" <jaredeh@...il.com>
To: "Phillip Lougher" <phillip@...gher.demon.co.uk>
Cc: Linux-kernel@...r.kernel.org, linux-embedded@...r.kernel.org,
linux-mtd <linux-mtd@...ts.infradead.org>,
"Jörn Engel" <joern@...fs.org>,
tim.bird@...sony.com, cotte@...ibm.com, nickpiggin@...oo.com.au
Subject: Re: [PATCH 04/10] AXFS: axfs_inode.c
> I assume compressed blocks can be larger than PAGE_CACHE_SIZE? This suffers
> from the rather obvious inefficiency that you decompress a big block >
> PAGE_CACHE_SIZE, but only copy one PAGE_CACHE_SIZE page out of it. If
> multiple files are being read simultaneously (a common occurrence), then
> each is going to replace your one cached uncompressed block
> (sbi->current_cnode_index), leading to decompressing the same blocks over
> and over again on sequential file access.
>
> readpage file A, index 1 -> decompress block X
> readpage file B, index 1 -> decompress block Y (replaces X)
> readpage file A, index 2 -> repeated decompress of block X (replaces Y)
> readpage file B, index 2 -> repeated decompress of block Y (replaces X)
>
> and so on.
Yep. I've been thinking about optimizing it. So far it hasn't been an
issue for my customers, since most fs traffic is on the XIP pages. Once
I get a good automated performance test up, we'll probably look into
something to improve this.
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/