Date:	Mon, 13 Jan 2014 14:01:08 -0700
From:	Andreas Dilger <adilger@...ger.ca>
To:	Benjamin LaHaise <bcrl@...ck.org>
Cc:	Ext4 Developers List <linux-ext4@...r.kernel.org>
Subject: Re: high write latency bug in ext3 / jbd in 3.4

On Jan 13, 2014, at 1:13 PM, Benjamin LaHaise <bcrl@...ck.org> wrote:
> I've recently encountered a bug in ext3 where the occasional write
> is showing extremely high latency, on the order of 2.2 to 11 seconds
> compared to a more typical 200-300ms.  This is happening on a 3.4.67
> kernel.  When this occurs, the system is writing to disk somewhere
> between 290-330MB/s.  The test takes anywhere from 3 to 12 minutes
> into a run to trigger the high latency write.  During one of these
> high latency writes, vmstat reports 0 blocks being written to disk.
> The disk array being written to is able to write quite a bit faster
> (about 770MB/s).
> 
> The setup is a bit complicated, but is completely reproducible.
> The workload consists of about 8 worker threads creating and then
> writing out spool files that are a little under 8MB in size.  After
> each write, the file and the directory it is in are then fsync()d.
> The latency measured is from the beginning open() of a spool file
> until the final fsync() completes.
> 
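Just to restate the pattern being timed, it is roughly the following
per file.  This is a minimal sketch with made-up paths and a made-up
size, single-threaded and without error handling; the real harness
runs ~8 such workers concurrently.

#define _GNU_SOURCE		/* for O_DIRECTORY */
#include <fcntl.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

#define SPOOL_SIZE (8 * 1024 * 1024 - 64 * 1024)  /* "a little under 8MB" */

static char buf[SPOOL_SIZE];

/* Create one spool file, fsync() the file and then its directory, and
 * report the open()-to-final-fsync() latency. */
static void write_spool(const char *dir, const char *path)
{
	struct timespec t0, t1;
	int fd, dfd;

	clock_gettime(CLOCK_MONOTONIC, &t0);

	fd = open(path, O_CREAT | O_TRUNC | O_WRONLY, 0644);
	write(fd, buf, sizeof(buf));	/* a real harness loops on short writes */
	fsync(fd);			/* flush file data and metadata */
	close(fd);

	dfd = open(dir, O_RDONLY | O_DIRECTORY);
	fsync(dfd);			/* flush the new directory entry */

	clock_gettime(CLOCK_MONOTONIC, &t1);
	close(dfd);

	printf("%s: %.1f ms\n", path,
	       (t1.tv_sec - t0.tv_sec) * 1000.0 +
	       (t1.tv_nsec - t0.tv_nsec) / 1e6);
}

int main(void)
{
	/* hypothetical spool directory and file name */
	write_spool("/spool", "/spool/file0");
	return 0;
}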
> Poking around the system with latencytop shows that sleep_on_buffer()
> is where all the latency is coming from, leading to log_wait_commit()
> showing the very high latency for the fsync()s.  This leads me to
> believe that jbd is somehow not properly flushing a buffer being
> waited on in a timely fashion.  Changing the elevator in use has no effect.
> 
> Does anyone have any ideas on where to look in ext3 or jbd for something
> that might be causing this behaviour?  If I use ext4 to mount the ext3
> filesystem being tested, the problem goes away.  Testing on newer
> kernels is not very easy to do (the system has other dependencies on
> the 3.4 kernel).  Thoughts?

Not to be flippant, but is there any reason NOT to just mount the
filesystem with ext4?  A large number of improvements in the ext4
code (e.g. delayed allocation, multi-block allocation) don't require
any on-disk format changes, so if there is a concern about problems
you can still fall back to an ext3-type mount.
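
To be clear, no reformat is needed for that; it is just a normal mount
with a different filesystem type, i.e. the equivalent of
"mount -t ext4 /dev/sdb1 /spool" (device and mountpoint hypothetical).
A minimal sketch of the same thing via mount(2):

#include <stdio.h>
#include <sys/mount.h>

int main(void)
{
	/* Same as: mount -t ext4 /dev/sdb1 /spool (needs root;
	 * the device and mountpoint are made-up examples). */
	if (mount("/dev/sdb1", "/spool", "ext4", 0, NULL) != 0) {
		perror("mount");
		return 1;
	}
	return 0;
}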

There are further improvements in ext4 that can be used on upgraded
ext3 filesystems if the feature bit is enabled (in particular extent-
mapped files).  However, extent-mapped files are not accessible under
ext3, so it makes sense to run ext4 without any new features for a
while, until you are sure it is working for you.
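
For reference, that feature bit can later be set on the existing
filesystem with e.g. "tune2fs -O extents /dev/sdb1" (hypothetical
device).  Only files written after that are extent-mapped, but once
the bit is set the filesystem can no longer be mounted as ext3, so
that is the step to defer until you trust ext4 with this workload.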

Using delalloc, mballoc, and extents can reduce application-visible
read, write, and unlink latency significantly, because blocks are
allocated and freed in contiguous chunks after the file is written
from userspace.
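
One easy way to see that effect is to ask the kernel for a file's
extent count via the FIEMAP ioctl (this is what filefrag does); with
delalloc the ~8MB spool files should come back as only a handful of
extents.  A sketch, assuming nothing beyond a file name on the
command line:

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/fs.h>
#include <linux/fiemap.h>

/* Print the number of extents backing a file, like filefrag does.
 * Fewer, larger extents means the allocator produced contiguous runs. */
int main(int argc, char **argv)
{
	struct fiemap fm;
	int fd;

	if (argc != 2) {
		fprintf(stderr, "usage: %s <file>\n", argv[0]);
		return 1;
	}
	fd = open(argv[1], O_RDONLY);
	if (fd < 0) {
		perror("open");
		return 1;
	}
	memset(&fm, 0, sizeof(fm));
	fm.fm_length = ~0ULL;	/* map the whole file */
	fm.fm_extent_count = 0;	/* only count extents, don't copy them out */
	if (ioctl(fd, FS_IOC_FIEMAP, &fm) < 0) {
		perror("FS_IOC_FIEMAP");
		return 1;
	}
	printf("%s: %u extent(s)\n", argv[1], fm.fm_mapped_extents);
	return 0;
}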

We've been discussing removing the ext3 code in favour of ext4 for
a while already, and newer Fedora and RHEL kernels have been using
the ext4 code to mount ext2- and ext3-formatted filesystems for some
time.

Cheers, Andreas
