Message-Id: <200808211417.14425.arnd@arndb.de>
Date: Thu, 21 Aug 2008 14:17:13 +0200
From: Arnd Bergmann <arnd@...db.de>
To: jaredeh@...il.com
Cc: Linux-kernel@...r.kernel.org, linux-embedded@...r.kernel.org,
linux-mtd <linux-mtd@...ts.infradead.org>,
Jörn Engel <joern@...fs.org>,
tim.bird@...sony.com, cotte@...ibm.com, nickpiggin@...oo.com.au
Subject: Re: [PATCH 04/10] AXFS: axfs_inode.c
On Thursday 21 August 2008, Jared Hulbert wrote:
> + array_index = AXFS_GET_INODE_ARRAY_INDEX(sbi, ino_number);
> + array_index += page->index;
> +
> + node_index = AXFS_GET_NODE_INDEX(sbi, array_index);
> + node_type = AXFS_GET_NODE_TYPE(sbi, array_index);
> +
> + if (node_type == Compressed) {
> +		/* node is in compressed region */
> + cnode_offset = AXFS_GET_CNODE_OFFSET(sbi, node_index);
> + cnode_index = AXFS_GET_CNODE_INDEX(sbi, node_index);
> + down_write(&sbi->lock);
> + if (cnode_index != sbi->current_cnode_index) {
> + /* uncompress only necessary if different cblock */
> + ofs = AXFS_GET_CBLOCK_OFFSET(sbi, cnode_index);
> + len = AXFS_GET_CBLOCK_OFFSET(sbi, cnode_index + 1);
> + len -= ofs;
> + axfs_copy_data(sb, cblk1, &(sbi->compressed), ofs, len);
> + axfs_uncompress_block(cblk0, cblk_size, cblk1, len);
> + sbi->current_cnode_index = cnode_index;
> + }
> + downgrade_write(&sbi->lock);
> + max_len = cblk_size - cnode_offset;
> + len = max_len > PAGE_CACHE_SIZE ? PAGE_CACHE_SIZE : max_len;
> + src = (void *)((unsigned long)cblk0 + cnode_offset);
> + memcpy(pgdata, src, len);
> + up_read(&sbi->lock);
This looks very nice, but it could use some comments about how the data is
actually stored on disk. It took me some time to figure out that it allows
tails to be merged into compressed blocks, which I was about to suggest
you implement ;-). Cramfs doesn't do tail merging, and I found it to be the
main reason why squashfs compresses better than cramfs, apart from the
default block size, which you can change on either one.
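To spell out what I eventually pieced together (this is only my reading of
the code above -- the structure and field names below are made up for
illustration, they are not from your patch): each data node apparently
records which compressed block it lives in plus a byte offset into the
uncompressed data, so several small tails can share one cblock:

	/*
	 * Illustration only, not from the patch: a "Compressed" node is
	 * addressed by (cblock, offset into the uncompressed data), so
	 * many small tails can be packed into a single cblock.
	 */
	struct cnode_example {
		u64 cblock_index;	/* which cblock to inflate into cblk0   */
		u64 offset;		/* byte offset within the inflated data */
	};

	/*
	 * Reading a tail then means: inflate cblock_index once (cached
	 * via sbi->current_cnode_index) and memcpy from cblk0 + offset,
	 * which is exactly what the code above does.
	 */

If that is roughly right, a couple of comments along those lines in
axfs_inode.c would save the next reader the same detour.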
Have you seen any benefit from the rwsem over a simple mutex? I would guess
that you can never even get concurrent readers, since I haven't found a
single down_read() in your code, only downgrade_write().
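For illustration, this is roughly what the same path would look like with
sbi->lock as a plain struct mutex (sketch only, reusing the variables from
your code above):

	mutex_lock(&sbi->lock);
	if (cnode_index != sbi->current_cnode_index) {
		/* uncompress only if this is a different cblock */
		ofs = AXFS_GET_CBLOCK_OFFSET(sbi, cnode_index);
		len = AXFS_GET_CBLOCK_OFFSET(sbi, cnode_index + 1) - ofs;
		axfs_copy_data(sb, cblk1, &(sbi->compressed), ofs, len);
		axfs_uncompress_block(cblk0, cblk_size, cblk1, len);
		sbi->current_cnode_index = cnode_index;
	}
	max_len = cblk_size - cnode_offset;
	len = max_len > PAGE_CACHE_SIZE ? PAGE_CACHE_SIZE : max_len;
	memcpy(pgdata, (char *)cblk0 + cnode_offset, len);
	mutex_unlock(&sbi->lock);

As far as I can see, without any down_read() callers the downgrade buys you
nothing, and the mutex version is simpler.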
Arnd <><