Message-ID: <4BF5D875.3030900@acm.org>
Date:	Thu, 20 May 2010 18:48:53 -0600
From:	Zan Lynx <zlynx@....org>
To:	KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>
CC:	lwoodman@...hat.com, LKML <linux-kernel@...r.kernel.org>,
	linux-mm <linux-mm@...ck.org>, Nick Piggin <npiggin@...e.de>,
	Jan Kara <jack@...e.cz>
Subject: Re: RFC: dirty_ratio back to 40%

On 5/20/10 5:48 PM, KOSAKI Motohiro wrote:
> Hi
>
> CC to Nick and Jan
>
>> We've seen multiple performance regressions linked to the lower (20%)
>> dirty_ratio.  When performing enough IO to overwhelm the background
>> flush daemons, the percentage of dirty pagecache memory quickly climbs
>> to the new, lower dirty_ratio value of 20%.  At that point all writing
>> processes are forced to stop and write dirty pagecache pages back to disk.
>> This causes performance regressions in several benchmarks as well as
>> noticeable overall sluggishness.  We all know that the dirty_ratio is
>> an integrity vs. performance trade-off, but the file system journaling
>> will cover any devastating effects in the event of a system crash.
>>
>> Increasing the dirty_ratio to 40% regains the performance lost in
>> several benchmarks.  What does everyone think about this?
>
> In the past, Jan Kara also claimed exactly the same thing.
>
> 	Subject: [LSF/VM TOPIC] Dynamic sizing of dirty_limit
> 	Date: Wed, 24 Feb 2010 15:34:42 +0100
>
> 	>  (*) We ended up increasing dirty_limit in SLES 11 to 40% as it used to be
> 	>  with old kernels because customers running e.g. LDAP (using BerkeleyDB
> 	>  heavily) were complaining about performance problems.
>
> So, I'd prefer to restore the default rather than have both Red Hat and SUSE
> carry exactly the same distro-specific patch, because we can easily imagine
> other users facing the same issue in the future.

On desktop systems the low dirty limits help maintain an interactive feel.
Users expect applications that are saving data to be slow. They do not
like it when every application in the system randomly comes to a halt
because one program has stuffed data up to the dirty limit.

The cause and effect for the system slowdown is clear when the dirty 
limit is low. "I saved data and now the system is slow until it is 
done." When the dirty page ratio is very high, the cause and effect is 
disconnected. "I was just web surfing and the system came to a halt."

I think we should expect server admins to do more tuning than desktop 
users, so the default limits should stay low in my opinion.
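
For a server, that tuning normally amounts to raising vm.dirty_ratio,
e.g. "sysctl -w vm.dirty_ratio=40" or a line in /etc/sysctl.conf.  A
minimal C sketch of the same write, assuming root and using the 40%
value proposed in this thread:

#include <stdio.h>

int main(void)
{
	/* Raise the foreground dirty throttle threshold to 40%. */
	FILE *f = fopen("/proc/sys/vm/dirty_ratio", "w");

	if (!f) {
		perror("/proc/sys/vm/dirty_ratio");
		return 1;
	}
	fprintf(f, "40\n");
	fclose(f);
	return 0;
}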

-- 
Zan Lynx
zlynx@....org

"Knowledge is Power.  Power Corrupts.  Study Hard.  Be Evil."
