Date:	Tue, 28 Jan 2014 00:55:18 +0100
From:	Jan Kara <jack@...e.cz>
To:	Benjamin LaHaise <bcrl@...ck.org>
Cc:	linux-ext4@...r.kernel.org
Subject: Re: high write latency bug in ext3 / jbd in 3.4

  Hello,

On Mon 13-01-14 15:13:20, Benjamin LaHaise wrote:
> I've recently encountered a bug in ext3 where the occasional write is 
> showing extremely high latency, on the order of 2.2 to 11 seconds compared 
> to a more typical 200-300ms.  This is happening on a 3.4.67 kernel.  When 
> this occurs, the system is writing to disk somewhere between 290-330MB/s.  
> The test takes anywhere from 3 to 12 minutes into a run to trigger the 
> high latency write.  During one of these high latency writes, vmstat reports 
> 0 blocks being written to disk.  The disk array being written to is able to 
> write quite a bit faster (about 770MB/s).
> 
> The setup is a bit complicated, but is completely reproducible.  The 
> workload consists of about 8 worker threads creating and then writing out 
> spool files that are a little under 8MB in size.  After each write, the file 
> and the directory it is in are then fsync()d.  The latency measured is from 
> the beginning open() of a spool file until the final fsync() completes.
> 
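  Just to make sure I understand the workload, each worker is doing
roughly the following per spool file (a minimal sketch with made-up
names and sizes, not your actual code):

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <time.h>
#include <unistd.h>

#define SPOOL_SIZE (8 * 1024 * 1024 - 64 * 1024)	/* a little under 8MB */

static double now(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return ts.tv_sec + ts.tv_nsec / 1e9;
}

int main(void)
{
	static char buf[SPOOL_SIZE];
	double start = now();
	int dirfd, fd;

	dirfd = open("spool", O_RDONLY | O_DIRECTORY);
	fd = open("spool/file.tmp", O_CREAT | O_WRONLY | O_TRUNC, 0644);
	if (dirfd < 0 || fd < 0) {
		perror("open");
		return 1;
	}
	memset(buf, 'x', sizeof(buf));
	if (write(fd, buf, sizeof(buf)) != (ssize_t)sizeof(buf)) {
		perror("write");
		return 1;
	}
	fsync(fd);	/* flush file data and metadata */
	fsync(dirfd);	/* make the new directory entry durable */
	printf("latency: %.3f s\n", now() - start);
	close(fd);
	close(dirfd);
	return 0;
}

  If your code differs from this in some important way (O_DIRECT,
sync_file_range(), writing in small chunks, ...), please correct me,
since it changes how the fsync()s map onto journal commits.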
> Poking around the system with latencytop shows that sleep_on_buffer() is 
> where all the latency is coming from, leading to log_wait_commit() showing 
> the very high latency for the fsync()s.  This leads me to believe that jbd 
> is somehow not properly flushing a buffer being waited on in a timely 
> fashion.  Changing the elevator in use has no effect.
  I'm not sure whether you have already switched to ext4 as others have
suggested in this thread. If not:
  1) Since the stall is so long, can you run
     'echo w >/proc/sysrq-trigger'
     when the stall happens and send the stack traces from kernel log?
  2) Are you running with the 'barrier' mount option?

> Does anyone have any ideas on where to look in ext3 or jbd for something 
> that might be causing this behaviour?  If I use ext4 to mount the ext3 
> filesystem being tested, the problem goes away.  Testing on newer kernels 
> is not very easy to do (the system has other dependencies on the 3.4 
> kernel).  Thoughts?
  My suspicion is that we are hanging on writing the 'commit' block of a
transaction. That issues a cache flush to the storage (at least when
barriers are enabled) and the flush can take quite a bit of time if we are
unlucky.
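  If you want a rough feel for how expensive a single commit plus cache
flush is on that array, an untested probe like the one below (file name
and iteration count are arbitrary) can be run on the same filesystem;
with barriers on, each small fsync() forces a journal commit and hence a
flush:

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
	int fd = open("probe.tmp", O_CREAT | O_WRONLY | O_TRUNC, 0644);
	struct timespec t0, t1;
	char byte = 0;
	int i;

	if (fd < 0) {
		perror("open");
		return 1;
	}
	for (i = 0; i < 20; i++) {
		if (pwrite(fd, &byte, 1, 0) != 1)	/* dirty a single block */
			break;
		clock_gettime(CLOCK_MONOTONIC, &t0);
		fsync(fd);				/* commit + cache flush */
		clock_gettime(CLOCK_MONOTONIC, &t1);
		printf("fsync %2d: %.1f ms\n", i,
		       (t1.tv_sec - t0.tv_sec) * 1e3 +
		       (t1.tv_nsec - t0.tv_nsec) / 1e6);
	}
	close(fd);
	return 0;
}

  If some of those numbers jump into the seconds while your test is
running, the flush is the likely culprit.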

								Honza
-- 
Jan Kara <jack@...e.cz>
SUSE Labs, CR
