Message-ID: <20120410151326.GA4936@quack.suse.cz>
Date:	Tue, 10 Apr 2012 17:13:26 +0200
From:	Jan Kara <jack@...e.cz>
To:	Michael Tokarev <mjt@....msk.ru>
Cc:	Dave Chinner <david@...morbit.com>, Jan Kara <jack@...e.cz>,
	Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: dramatic I/O slowdown after upgrading 2.6.38->3.0+

On Tue 10-04-12 10:00:38, Michael Tokarev wrote:
> On 10.04.2012 06:26, Dave Chinner wrote:
> 
> > Barriers. Turn them off, and see if that fixes your problem.
> 
> Thank you, Dave, for the hint.  And nope, that's not it, not at all... ;)
> While turning off barriers helps a tiny bit, recovering a few percent of
> the huge slowdown, it does not cure the issue.
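> (For reference, on ext4 barriers can be toggled at mount time; e.g.,
> assuming the filesystem under test is mounted at /mnt:
>   # mount -o remount,barrier=0 /mnt
> and re-enabled with barrier=1.)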
> 
> Meanwhile, I observed the following:
> 
> 1) the issue persists on more recent kernels too, I tried 3.3
>    and it is also as slow as 3.0.
> 
> 2) at least the 2.6.38 kernel works fine, as fast as 2.6.32; I'll
>    try 2.6.39 next.
> 
>    I updated $subject accordingly.
> 
> 3) the most important thing, I think: this is a general I/O speed
>    issue.  Here's why:
> 
>   2.6.38:
>   # dd if=/dev/sdb of=/dev/null bs=1M iflag=direct count=100
>   100+0 records in
>   100+0 records out
>   104857600 bytes (105 MB) copied, 1.73126 s, 60.6 MB/s
> 
>   3.0:
>   # dd if=/dev/sdb of=/dev/null bs=1M iflag=direct count=100
>   100+0 records in
>   100+0 records out
>   104857600 bytes (105 MB) copied, 29.4508 s, 3.6 MB/s
> 
> That's about a 20x difference on a direct read from the
> same (idle) device!!
  Huh, that's a huge difference for such a trivial load. So we can rule out
filesystems, writeback, and the mm layer. I also wouldn't suspect the IO
scheduler, but you can always check by comparing the dd numbers after
  echo noop >/sys/block/sdb/queue/scheduler
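(A quick sanity check, assuming the device under test is still sdb; the
sample output below is illustrative, as the available schedulers depend
on the kernel config:
  # cat /sys/block/sdb/queue/scheduler
  noop deadline [cfq]
  # echo noop >/sys/block/sdb/queue/scheduler
  # dd if=/dev/sdb of=/dev/null bs=1M iflag=direct count=100
The bracketed name is the currently active scheduler; if the dd numbers
don't move with noop, the scheduler is ruled out.)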
Anyway, the most likely cause seems to be some driver issue (which would
also explain why you see it on only one machine). I'd also compare the
config files of the two kernels very closely to check for some unexpected
difference...
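For example, assuming both config files are installed under /boot (the
exact paths depend on the distro):
  # diff /boot/config-2.6.38 /boot/config-3.0 | grep -i -e ata -e scsi -e block
Storage-related options that were added or changed defaults between the
two releases show up quickly in such a diff.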

								Honza
-- 
Jan Kara <jack@...e.cz>
SUSE Labs, CR
