Message-ID: <20070423163247.GA30252@in.ibm.com>
Date:	Mon, 23 Apr 2007 22:02:47 +0530
From:	Suparna Bhattacharya <suparna@...ibm.com>
To:	Amit Gud <gud@....ksu.edu>
Cc:	linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org,
	val_henson@...ux.intel.com, riel@...riel.com, zab@...bo.net,
	arjan@...radead.org, brandon@...p.org, karunasagark@...il.com,
	gud@....edu
Subject: Re: [RFC][PATCH] ChunkFS: fs fission for faster fsck

On Mon, Apr 23, 2007 at 09:58:49PM +0530, Suparna Bhattacharya wrote:
> On Mon, Apr 23, 2007 at 06:21:34AM -0500, Amit Gud wrote:
> > 
> > This is an initial implementation of ChunkFS technique, briefly discussed
> > at: http://lwn.net/Articles/190222 and 
> > http://cis.ksu.edu/~gud/docs/chunkfs-hotdep-val-arjan-gud-zach.pdf
> > 
> > This implementation is done within the ext2 driver. Every chunk is an 
> > independent ext2 file system. The knowledge about chunks is kept within 
> > ext2, and 'continuation inodes', which allow files and directories to 
> > span multiple chunks, are also managed within ext2.
> > 
> > At mount time, super blocks for all the chunks are created and linked into 
> > the global super_blocks list maintained by the VFS. This allows independent 
> > behavior of individual chunks and also lets writeback happen 
> > seamlessly.
> > 
> > Apart from this, the chunkfs code in ext2 effectively only provides 
> > knowledge of:
> > 
> > - which inode, and which block within it, to look up for a given file's 
> > logical block number
> > - in which chunk to allocate the next inode / block
> > - how many inodes to scan when a directory is being read
> > 
> > To maintain ext2's inode-number uniqueness property, the 8 most 
> > significant bits of the inode number indicate the chunk in which it resides.
> > 
> > As noted, this is a preliminary implementation, and many changes are 
> > expected before this code is sanely usable. Some known issues and 
> > obvious optimizations are listed in the TODO file in the chunkfs patch.
> > 
> > http://cis.ksu.edu/~gud/patches/chunkfs-v0.0.8.patch
> > - one big patch
> > - applies to 2.6.18
> 
> 
> Could you send this out as a patch against the ext2 codebase, so we can look
> at just the chunkfs changes? That might also make it small enough
> to inline your patch in email for review. 

Sorry, I missed the part about ext2-chunkfs-diff below.

Regards
suparna

> 
> What kind of results are you planning to gather to evaluate/optimize this ?
> 
> Regards
> Suparna
> 
> > 
> > Attached - ext2-chunkfs-diff.patch.gz
> > - since the code is a spin-off of ext2, this patch better shows what
> >   has changed from ext2.
> > 
> > git://cislinux.cis.ksu.edu/chunkfs-tools
> > - mkfs, and fsck for chunkfs.
> > 
> > http://cis.ksu.edu/~gud/patches/config-chunkfs-2.6.18-uml
> > - config file used; tested mostly on UML with loopback file systems.
> > 
> > NOTE: No xattrs or XIP yet; CONFIG_EXT2_FS_XATTR and CONFIG_EXT2_FS_XIP 
> > should be "no" for a clean compile.
> > 
> > 
> > Please comment, suggest, criticize. Patches most welcome.
> > 
> > 
> > Best,
> > AG
> > --
> > May the source be with you.
> > http://www.cis.ksu.edu/~gud
> 
> 
> 
> -- 
> Suparna Bhattacharya (suparna@...ibm.com)
> Linux Technology Center
> IBM Software Lab, India
> 

-- 
Suparna Bhattacharya (suparna@...ibm.com)
Linux Technology Center
IBM Software Lab, India

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
