Date:	Mon, 12 Oct 2009 11:29:48 +0200
From:	Christian Ehrhardt <ehrhardt@...ux.vnet.ibm.com>
To:	Wu Fengguang <fengguang.wu@...el.com>
CC:	Martin Schwidefsky <schwidefsky@...ibm.com>,
	Jens Axboe <jens.axboe@...cle.com>,
	Peter Zijlstra <a.p.zijlstra@...llo.nl>,
	"linux-mm@...ck.org" <linux-mm@...ck.org>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [PATCH] mm: make VM_MAX_READAHEAD configurable

Wu Fengguang wrote:
> [SNIP]
>>> May I ask for more details about your performance regression and why
>>> it is related to readahead size? (we didn't change VM_MAX_READAHEAD..)
>>>   
>>>       
>> Sure, the performance regression appeared when comparing Novell SLES10 
>> vs. SLES11.
>> While you are right Wu that the upstream default never changed so far, 
>> SLES10 had a
>> patch applied that set 512.
>>     
>
> I see. I'm curious why SLES11 removed that patch. Did it experience
> some regressions with the larger readahead size?
>
>   

Only the obvious, expected one: with very low free/cacheable
memory and a lot of parallel processes doing sequential I/O,
the RA size scales up for all of them, but 64 x max RA then
no longer fits.

For example, iozone with 64 threads (each on its own disk),
a sequential read access pattern, and I guess ~10 MB free for cache
suffered by ~15% due to thrashing.
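The back-of-envelope arithmetic behind that thrashing is roughly as follows (a sketch only; the 512 KB per-device RA and ~10 MB cache figures are taken from this thread, not measured here):

```python
# Why 64 fully scaled-up readahead windows cannot fit in ~10 MB of cache.
threads = 64                  # iozone threads, one disk each (from the thread)
max_ra_kb = 512               # per-device max readahead, as in the SLES10 patch
free_cache_kb = 10 * 1024     # "~10 MB free for cache" from the example above

total_ra_kb = threads * max_ra_kb
print(total_ra_kb)                      # 32768 KB, i.e. 32 MB of RA windows
print(total_ra_kb > free_cache_kb)      # True: windows evict each other
```

So the aggregate readahead demand (32 MB) exceeds the available cache by more than 3x, and freshly read-ahead pages get evicted before they are consumed.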

But that is an acceptable regression, because it is not a relevant
customer scenario, while the benefits do apply to customer scenarios.

[...]
>> And as Andrew mentioned, the diversity of devices causes any default to be 
>> wrong for one
>> or another installation. To solve that, the udev approach can also differ 
>> between different
>> device types (might be easier on s390 than on other architectures 
>> because I need to take
>> care of two disk types atm - and both should get 512).
>>     
>
> I guess it's not a general solution for all. There are so many
> devices in the world, and we have not yet considered the
> memory/workload combinations.
>   
I completely agree, let me fix "my" issue per udev for now.
And if some day the readahead mechanism evolves and
doesn't need any max RA at all we can all be happy.
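A udev rule for that could look roughly like this (a sketch of the approach described above; the file path, the `dasd*` device match, and the 512 KB value are my reading of the thread, not a tested rule from it):

```
# /etc/udev/rules.d/60-readahead.rules  (illustrative path)
# Set a 512 KB max readahead on s390 DASD block devices as they appear.
ACTION=="add|change", SUBSYSTEM=="block", KERNEL=="dasd*", \
    ATTR{queue/read_ahead_kb}="512"
```

The same setting can be applied to an already-present device by hand, e.g. `blockdev --setra 1024 /dev/dasda` (the device name is illustrative; `--setra` counts 512-byte sectors, so 1024 sectors = 512 KB).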

[...]

-- 

Grüsse / regards, Christian Ehrhardt
IBM Linux Technology Center, Open Virtualization 
