Message-ID: <20070516210206.GH26766@think.oraclecorp.com>
Date:	Wed, 16 May 2007 17:02:06 -0400
From:	Chris Mason <chris.mason@...cle.com>
To:	Andrew Morton <akpm@...ux-foundation.org>
Cc:	Chuck Ebbert <cebbert@...hat.com>, linux-kernel@...r.kernel.org
Subject: Re: filesystem benchmarking fun

On Wed, May 16, 2007 at 01:37:26PM -0700, Andrew Morton wrote:
> On Wed, 16 May 2007 16:14:14 -0400
> Chris Mason <chris.mason@...cle.com> wrote:
> 
> > On Wed, May 16, 2007 at 01:04:13PM -0700, Andrew Morton wrote:
> > > > The good news is that if you let it run long enough, the times
> > > > stabilize.  The bad news is:
> > > > 
> > > > create dir kernel-86 222MB in 15.85 seconds (14.03 MB/s)
> > > > create dir kernel-87 222MB in 28.67 seconds (7.76 MB/s)
> > > > create dir kernel-88 222MB in 18.12 seconds (12.27 MB/s)
> > > > create dir kernel-89 222MB in 19.77 seconds (11.25 MB/s)
> > > 
> > > well hang on.  Doesn't this just mean that the first few runs were writing
> > > into pagecache and the later ones were blocking due to dirty-memory limits?
> > > 
> > > Or do you have a sync in there?
> > > 
> > There's no sync, but if you watch vmstat you can clearly see the log
> > flushes, even when the overall create times are 11MB/s.  vmstat goes
> > 30MB/s -> 4MB/s or less, then back up to 30MB/s.
> 
> How do you know that it is a log flush rather than, say, pdflush
> hitting the blockdev inode and doing a big seeky write?

I don't...it gets especially tricky because ext3_writepage starts
a transaction, and so pdflush does hit the log flushing code too.

So, in comes systemtap.  I instrumented submit_bh to look for seeks
(defined as writes more than 16 blocks apart) when the process was
inside __log_wait_for_space.  The probe is attached; it is _really_
quick and dirty because I'm about to run out the door.
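
For illustration only (the real thing is in the attached jbd.tap), a
probe along these lines, assuming the 2.6-era
submit_bh(int rw, struct buffer_head *bh) signature and jbd's
__log_wait_for_space, would look roughly like this:

global in_wait, last_block, wrote, seeks

probe kernel.function("__log_wait_for_space") {
        in_wait[tid()] = 1
}

probe kernel.function("__log_wait_for_space").return {
        printf("%d ext3 done waiting for space total wrote %d blocks seeks %d\n",
               pid(), wrote[tid()], seeks[tid()])
        delete in_wait[tid()]; delete wrote[tid()]
        delete seeks[tid()]; delete last_block[tid()]
}

probe kernel.function("submit_bh") {
        # low bit of rw set means a write on 2.6-era kernels (RW_MASK)
        if (in_wait[tid()] && ($rw & 1)) {
                blk = $bh->b_blocknr
                # count a seek when this write lands more than 16
                # blocks away from the previous one
                if (wrote[tid()] > 0 &&
                    (blk > last_block[tid()] + 16 || blk + 16 < last_block[tid()]))
                        seeks[tid()]++
                wrote[tid()]++
                last_block[tid()] = blk
        }
}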

Watching vmstat, every time __log_wait_for_space hits lots of seeks,
vmstat goes into the 2-4MB/s range.  Not a scientific match-up, but
here's some sample output:

7824 ext3 done waiting for space total wrote 3155 blocks seeks 2241
7827 ext3 done waiting for space total wrote 855 blocks seeks 598
7827 ext3 done waiting for space total wrote 2547 blocks seeks 1759
7653 ext3 done waiting for space total wrote 2273 blocks seeks 1609

I also recorded the total size of each seek; 66% of them were 6000
blocks or more.
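
One way to get that distribution (again just a sketch, not necessarily
what jbd.tap does) is to feed each seek distance into a statistics
aggregate and print a log2 histogram on exit:

global seek_dist, nseeks

# in the submit_bh probe above, where a seek is counted:
#         nseeks++
#         seek_dist <<< (blk > last_block[tid()] ?
#                        blk - last_block[tid()] : last_block[tid()] - blk)

probe end {
        if (nseeks)     # avoid reading an empty aggregate
                print(@hist_log(seek_dist))
}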

-chris


View attachment "jbd.tap" of type "text/plain" (1050 bytes)
