Date:	Tue, 22 Apr 2008 10:32:12 -0500
From:	"Jose R. Santos" <jrs@...ibm.com>
To:	Theodore Tso <tytso@....edu>
Cc:	linux-ext4@...r.kernel.org,
	Valerie Clement <valerie.clement@...l.net>
Subject: Re: [E2FSPROGS, RFC] mke2fs: New bitmap and inode table allocation
 for FLEX_BG

On Tue, 22 Apr 2008 10:51:25 -0400
Theodore Tso <tytso@....edu> wrote:

> On Tue, Apr 22, 2008 at 09:18:47AM -0500, Jose R. Santos wrote:
> > > @@ -1638,6 +1654,19 @@ static void PRS(int argc, char *argv[])
> > > 
> > >  	if (inode_size == 0)
> > >  		inode_size = get_int_from_profile(fs_types, "inode_size", 0);
> > > +	if (!flex_bg_size && (fs_param.s_feature_incompat &
> > > +			      EXT4_FEATURE_INCOMPAT_FLEX_BG))
> > > +		get_int_from_profile(fs_types, "flex_bg_size", 8);
> > 
> > A default of 256 block groups to pack seems a bit high based on some
> > of the performance testing that I've done.  At some point, having the
> > inodes too far away from the data blocks begins to affect performance
> > (especially on read operations).  The optimum number of groups depends
> > a lot on the platter density of the hard drive, so I expect that we
> > can increase the default grouping size as time goes by.  Using 128
> > groups was already showing performance degradation on read operations
> > on some of my smaller disks (147GB).  For now, I would change this to
> > 6 (64 groups) as this is a good balance for both big and small disks.
> 
> Actually this is 8 (as in 2**3), which was intentionally very small,
> because I was being conservative.  I could change it to be 64 if you
> think it is a better balance.  As you can see, it gets set later on
> down here.

I see that now; I guess I should not read code without having
breakfast.  I think 8 is a very safe and conservative number, maybe
too conservative.  The 64-group packing was the number I found to be
an overall improvement with the limited number of drives that I had
to test with.  I haven't done any testing on old drives or laptop
drives with slow spindle speeds, but I would think 16 or 32 would be
safe here unless the drive is really old and small.
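
For anyone who wants to experiment with different packings without
rebuilding mke2fs, the default should be overridable from
/etc/mke2fs.conf once this lands.  A sketch (assuming the profile key
keeps the "flex_bg_size" name from the patch, under an ext4 fs_types
stanza):

	[fs_types]
		ext4 = {
			flex_bg_size = 64
		}

That only supplies the default; a size given on the command line
still wins, which is what the !flex_bg_size test above is for.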

> 
> > > +		fs_param.s_log_groups_per_flex = int_log2(flex_bg_size);
> 
> And, in fact, the biggest bug that both you and I missed was that this:
> 
> > > +		get_int_from_profile(fs_types, "flex_bg_size", 8);
> 
> Should have been this:
> 
> 	flex_bg_size = get_int_from_profile(fs_types, "flex_bg_size", 8);
> 
> <Dons paper bag>
> 
> 						- Ted
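
Heh.  For the archives, then, the fixed-up hunk would read roughly
like this (a sketch from memory, so the surrounding context lines may
not be exact):

	if (!flex_bg_size && (fs_param.s_feature_incompat &
			      EXT4_FEATURE_INCOMPAT_FLEX_BG))
		flex_bg_size = get_int_from_profile(fs_types,
						    "flex_bg_size", 8);
	...
	if (flex_bg_size)
		fs_param.s_log_groups_per_flex = int_log2(flex_bg_size);

where int_log2() is the usual shift-and-count helper in mke2fs.c,
something like:

	/* floor(log2(arg)): int_log2(8) == 3, hence s_LOG_groups_per_flex */
	static int int_log2(int arg)
	{
		int	l = 0;

		arg >>= 1;
		while (arg) {
			l++;
			arg >>= 1;
		}
		return l;
	}

So the default of 8 really does mean eight groups per flex group
(stored on disk as the log value 3), not 2**8.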



-JRS