Message-ID: <4F85C59B.3090707@msgid.tls.msk.ru>
Date:	Wed, 11 Apr 2012 21:55:39 +0400
From:	Michael Tokarev <mjt@....msk.ru>
To:	Mike Christie <michaelc@...wisc.edu>
CC:	Jan Kara <jack@...e.cz>, Dave Chinner <david@...morbit.com>,
	Kernel Mailing List <linux-kernel@...r.kernel.org>,
	SCSI Mailing List <linux-scsi@...r.kernel.org>
Subject: Re: dramatic I/O slowdown after upgrading 2.6.38->3.0+

On 11.04.2012 21:19, Mike Christie wrote:
> On 04/11/2012 04:40 AM, Michael Tokarev wrote:
>> On 10.04.2012 19:13, Jan Kara wrote:
>>>> On Tue 10-04-12 10:00:38, Michael Tokarev wrote:
>> []
>>>>>>   2.6.38:
>>>>>>   # dd if=/dev/sdb of=/dev/null bs=1M iflag=direct count=100
>>>>>>   100+0 records in
>>>>>>   100+0 records out
>>>>>>   104857600 bytes (105 MB) copied, 1.73126 s, 60.6 MB/s
>>>>>>
>>>>>>   3.0:
>>>>>>   # dd if=/dev/sdb of=/dev/null bs=1M iflag=direct count=100
>>>>>>   100+0 records in
>>>>>>   100+0 records out
>>>>>>   104857600 bytes (105 MB) copied, 29.4508 s, 3.6 MB/s
>>>>>>
>>>>>> That's about a 20x difference on direct reads from the
>>>>>> same - idle - device!!
>>>>   Huh, that's a huge difference for such a trivial load. So we can rule out
>>>> filesystems, writeback, and mm. I also wouldn't think it's the I/O scheduler,
>>>> but you can always check by comparing the dd numbers after
>>>>   echo noop >/sys/block/sdb/queue/scheduler
> 
> Did you try newer 3.X kernels or just 3.0?

I tried 3.3.1; it shows exactly the same very slow speed
(about 3 MB/s vs. 60 MB/s).

> We were hitting a similar problem with iscsi. Same workload and it
> started with 2.6.38. I think it turned out to be this issue:
> 
> // thread with issue like what we hit:
> http://thread.gmane.org/gmane.linux.kernel/1244680

That thread refers to buffered I/O as far as I can see.  Note
that I specifically used dd's iflag=direct to rule out all
buffered operations.  The I/O really is very, very slow, and
the disk is 100% busy the whole time (which is also not the
situation described in the thread you referenced above - there,
the disk (SSD) does not have enough work to do).
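
For reference, one way to see the "100% busy" figure is to watch
the device with iostat (from the sysstat package) while dd runs -
the exact invocation below is just an illustration:

  # dd if=/dev/sdb of=/dev/null bs=1M iflag=direct count=100 &
  # iostat -x sdb 1

The %util column stays pinned at ~100 for the whole run here,
whereas in the referenced thread the device sits mostly idle.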

> // Patch that I think fixed issue:
> commit 3deaa7190a8da38453c4fabd9dec7f66d17fff67
> Author: Shaohua Li <shaohua.li@...el.com>
> Date:   Fri Feb 3 15:37:17 2012 -0800
> 
>     readahead: fix pipeline break caused by block plug

I think this patch is included in the 3.3 kernel; it was in
3.3-rc2 if my git-fu is right.  If so, then I have already
tried it (by running 3.3.1), and it didn't help at all.
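
(One quick way to double-check, assuming a clone of Linus' tree:

  $ git describe --contains 3deaa7190a8da38453c4fabd9dec7f66d17fff67

which should print a v3.3-rc2-based tag name if the commit really
is in that release.)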

Thank you!

/mjt
