Message-ID: <20080924203559.GK9929@mit.edu>
Date:	Wed, 24 Sep 2008 16:35:59 -0400
From:	Theodore Tso <tytso@....edu>
To:	Ric Wheeler <rwheeler@...hat.com>,
	Chris Mason <chris.mason@...cle.com>
Cc:	Andreas Dilger <adilger@....com>,
	Alan Cox <alan@...rguk.ukuu.org.uk>,
	linux-ext4@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH, RFC] ext4: Use preallocation when reading from the
	inode table

On Wed, Sep 24, 2008 at 09:23:34AM -0400, Ric Wheeler wrote:
>
> That sounds about right for modern S-ATA/SAS drives. I would expect that  
> having this be a tunable knob might help for some types of storage (SSD  
> might not care, but should be faster in any case?).
>

Well, for SSDs seek time shouldn't matter, so the limiting factor
should be the overhead of the read transaction and the I/O bandwidth
of the SSD.  So prefetching too much might hurt even more for SSDs,
although in comparison with spinning rust platters it would probably
still be faster.  :-)

So I'm pretty sure that with an SSD we'll want to turn the tunable
down, not up.

On Wed, Sep 24, 2008 at 10:20:34AM -0400, Chris Mason wrote:
>For the test runs being done here, there's a pretty high chance that all
>of the inodes you read ahead will get used before the pages are dropped,
>so we want to find a balance between those and the worst case workloads
>where inode reads are basically random.  

Yep, agreed.

On the other hand, if we take your iops numbers and translate them to
milliseconds, so we can measure the latency in the case where the
workload is essentially doing random reads, and then cross-correlate
them with my measurements, we get this table:

i/o size   iops   latency (ms)   % degradation       % improvement
                                 (random inodes)     (related-inode I/O)
    4k      131       7.634
    8k      130       7.692           0.77%               11.3%
   16k      128       7.813           2.34%               21.8%
   32k      126       7.937           3.97%               29.6%
   64k      121       8.264           8.26%               35.5%
  128k      113       8.850          15.93%               40.0%
  256k      100      10.000          31.00%               42.4%
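
(For anyone who wants to check the arithmetic, here is a rough sketch
of how the latency and degradation columns fall out of the iops
numbers; the improvement column comes from my separate measurements
and isn't derived here.)

/* Sketch only: latency is 1000/iops, and degradation compares each
 * latency against the 4k baseline row. */
#include <stdio.h>

int main(void)
{
	static const struct { int kb; double iops; } rows[] = {
		{ 4, 131 }, { 8, 130 }, { 16, 128 }, { 32, 126 },
		{ 64, 121 }, { 128, 113 }, { 256, 100 },
	};
	double base_ms = 1000.0 / rows[0].iops;
	unsigned int i;

	for (i = 0; i < sizeof(rows) / sizeof(rows[0]); i++) {
		double ms = 1000.0 / rows[i].iops;

		printf("%4dk  %3.0f iops  %7.3f ms  %+6.2f%%\n",
		       rows[i].kb, rows[i].iops, ms,
		       (ms / base_ms - 1.0) * 100.0);
	}
	return 0;
}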

Depending on whether you believe that workloads involving random inode
reads are more common than related-inode I/O, the sweet spot is
probably somewhere between 32k and 128k.  I'm open to opinions
(preferably backed up with more benchmarks of likely workloads) on
whether we should use a default inode_readahead_bits value of 4 or
5 (i.e., 64k, my original guess, or 128k, in v2 of the patch).  But
yes, making it tunable is definitely going to be necessary, since
different workloads (e.g., squid vs. git repositories) will have very
different requirements.
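
(As a rough illustration, and assuming a 4k blocksize, the tunable
maps onto the I/O sizes in the table above as a simple shift; the
snippet below is only a sketch to show that mapping, not code from
the patch.)

/* Illustration only: with 4k blocks, inode_readahead_bits=4 reads
 * 16 blocks (64k) of the inode table at a time, and 5 reads 32
 * blocks (128k). */
#include <stdio.h>

int main(void)
{
	unsigned int blocksize = 4096;
	unsigned int bits;

	for (bits = 0; bits <= 6; bits++)
		printf("inode_readahead_bits=%u -> %2u blocks = %4uk\n",
		       bits, 1U << bits, (blocksize << bits) / 1024);
	return 0;
}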

The other thought which comes to mind is whether we should use a
similarly large readahead when we are doing writes rather than reads.
For example, if we are just updating a single inode, and we are
reading a block only to do a read/modify/write cycle, maybe we
shouldn't be doing as much readahead.

						- Ted

P.S.  One caveat is that my numbers were taken from a laptop SATA
drive, and if Chris's were taken from a desktop/server SATA drive the
numbers might not be directly comparable.
