Message-ID: <20110420220626.GL29872@redhat.com>
Date:	Wed, 20 Apr 2011 18:06:26 -0400
From:	Vivek Goyal <vgoyal@...hat.com>
To:	Chris Mason <chris.mason@...cle.com>
Cc:	linux-fsdevel <linux-fsdevel@...r.kernel.org>,
	linux-ext4 <linux-ext4@...r.kernel.org>, xfs@....sgi.com,
	jack@...e.cz, axboe@...nel.dk
Subject: Re: buffered writeback torture program

On Wed, Apr 20, 2011 at 02:23:29PM -0400, Chris Mason wrote:
> Hi everyone,
> 
> I dug out my old fsync latency tester to make sure Jens' new plugging
> code hadn't caused regressions.  This started off as a program Ted wrote
> during the firefox dark times, and I added some more code to saturate
> spindles with random IO.
> 
> The basic idea is:
> 
> 1) make a nice big sequential 8GB file
> 2) fork a process doing random buffered writes inside that file
> 3) overwrite a second 4K file in a loop, doing fsyncs as you go.
> 
> The test program times how long each write and fsync take in step three.
> The idea is that if we have problems with concurrent buffered writes and
> fsyncs, then all of our fsyncs will get stuck behind the random IO
> writeback and our latencies will be high.
> 
> For a while, xfs, btrfs and ext4 did really well at this test.  Our
> fsync latencies were very small and we all sent down synchronous IO that
> the elevator dispatched straight to the drive.
> 
> Things have changed though, both xfs and ext4 have grown code to do
> dramatically more IO than write_cache_pages has asked for (I'm pretty
> sure I told everyone this was a good idea at the time).  When doing
> sequential writes, this is a great idea.  When doing random IO, it leads
> to unbounded stalls in balance_dirty_pages.
> 
> Here's an example run on xfs:
> 
> # fsync-tester
> setting up random write file
> done setting up random write file
> starting fsync run
> starting random io!
> write time 0.0009s fsync time: 2.0142s
> write time 128.9305s fsync time: 2.6046s
> run done 2 fsyncs total, killing random writer
> 
> In this case the 128s spent in write was on a single 4K overwrite on a
> 4K file.
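
For reference, here is a minimal sketch of step 3 above (the timed overwrite
plus fsync loop). It is an illustration only, not the actual fsync-tester
source; the file name, iteration count and 4K buffer size are assumptions:

/*
 * Sketch of the timed loop: repeatedly overwrite a small file and time
 * each pwrite() and fsync() separately.  Not the real fsync-tester.
 */
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/time.h>

static double now(void)
{
        struct timeval tv;

        gettimeofday(&tv, NULL);
        return tv.tv_sec + tv.tv_usec / 1000000.0;
}

int main(void)
{
        char buf[4096];
        double t1, t2, t3;
        int fd, i;

        memset(buf, 0, sizeof(buf));
        fd = open("fsync-test-file", O_RDWR | O_CREAT, 0600);
        if (fd < 0) {
                perror("open");
                return 1;
        }
        for (i = 0; i < 100; i++) {
                t1 = now();
                if (pwrite(fd, buf, sizeof(buf), 0) != (ssize_t)sizeof(buf)) {
                        perror("pwrite");
                        return 1;
                }
                t2 = now();
                fsync(fd);
                t3 = now();
                printf("write time: %.4fs fsync time: %.4fs\n",
                       t2 - t1, t3 - t2);
        }
        close(fd);
        return 0;
}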

Chris, you seem to be doing 1MB (32768*32) writes to the fsync file instead of 4K.
I changed the size to 4K, but it did not make much difference.
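
(In terms of the sketch above, that change is just the size of the overwrite
in the timed loop; the macro name here is illustrative, not an actual
fsync-tester symbol:)

/* illustrative only: size of the overwrite in the timed write+fsync loop */
#define WRITE_SIZE 4096         /* was 32768 * 32, i.e. 1MB, in the posted program */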

Once the program exited because of the high write time, I restarted it, and
this time I don't see high write times.

First run
---------
# ./a.out 
setting up random write file
done setting up random write file
starting fsync run
starting random io!
write time: 0.0006s fsync time: 0.3400s
write time: 63.3270s fsync time: 0.3760s
run done 2 fsyncs total, killing random writer

Second run
----------
# ./a.out 
starting fsync run
starting random io!
write time: 0.0006s fsync time: 0.5359s
write time: 0.0007s fsync time: 0.3559s
write time: 0.0009s fsync time: 0.3113s
write time: 0.0008s fsync time: 0.4336s
write time: 0.0009s fsync time: 0.3780s
write time: 0.0008s fsync time: 0.3114s
write time: 0.0009s fsync time: 0.3225s
write time: 0.0009s fsync time: 0.3891s
write time: 0.0009s fsync time: 0.4336s
write time: 0.0009s fsync time: 0.4225s
write time: 0.0009s fsync time: 0.4114s
write time: 0.0007s fsync time: 0.4004s

Not sure why that would happen.

I am wondering why the pwrite/fsync process was throttled. It did not have any
pages in the page cache, and it shouldn't have hit the per-task dirty limits.
Does that mean the per-task dirty limit logic does not work, or am I completely
missing the root cause of the problem?

Thanks
Vivek