Message-ID: <alpine.LFD.2.00.0901070833430.3057@localhost.localdomain>
Date:	Wed, 7 Jan 2009 08:39:01 -0800 (PST)
From:	Linus Torvalds <torvalds@...ux-foundation.org>
To:	Peter Zijlstra <peterz@...radead.org>
cc:	Jan Kara <jack@...e.cz>, linux-kernel@...r.kernel.org,
	linux-mm <linux-mm@...ck.org>, Nick Piggin <npiggin@...e.de>
Subject: Re: Increase dirty_ratio and dirty_background_ratio?



On Wed, 7 Jan 2009, Peter Zijlstra wrote:
> 
> >   So the question is: What kind of workloads are lower limits supposed to
> > help? Desktop? Has anybody reported that they actually help? I'm asking
> > because we are probably going to increase limits to the old values for
> > SLES11 if we don't see serious negative impact on other workloads...
> 
> Adding some CCs.
> 
> The idea was that 40% of the memory is a _lot_ these days, and writeback
> times will be huge for those hitting sync or similar. By lowering these
> you'd smooth that out a bit.

Not just a bit. If you have 4GB of RAM (not at all unusual for even just a 
regular desktop, never mind a "real" workstation), it's simply crazy to 
allow 1.5GB of dirty memory. Not unless you have a really wicked RAID 
system with great write performance that can push it out to disk (with 
seeking) in just a few seconds.

And few people have that.
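
To put rough numbers on that, here is a back-of-the-envelope sketch in C.
The 4GB of RAM and the 40% ratio come from above; the ~50 MB/s effective
writeback rate is purely an assumed figure for a single disk that also has
to seek, not a measurement.

#include <stdio.h>

/*
 * Back-of-the-envelope estimate of how long flushing a full dirty set
 * takes.  The RAM size and the ratio come from the discussion above;
 * the ~50 MB/s effective writeback rate is an assumed figure for a
 * single seeking disk.
 */
int main(void)
{
	double ram_mb = 4096.0;		/* 4GB of RAM */
	double dirty_ratio = 0.40;	/* the old 40% default */
	double writeback_mb_s = 50.0;	/* assumed effective write rate */

	double dirty_mb = ram_mb * dirty_ratio;

	printf("allowed dirty memory: ~%.0f MB\n", dirty_mb);
	printf("time to flush it all: ~%.0f seconds\n",
	       dirty_mb / writeback_mb_s);
	return 0;
}

That comes out to roughly 1.6GB of dirty memory and over half a minute of
writeback on a sync.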

For a server, where throughput matters but latency generally does not, go 
ahead and raise it. But please don't raise it for anything sane. The only 
time it makes sense to up that percentage is for some odd special-case 
benchmark that otherwise can fit the dirty data set in memory, and never 
syncs it (i.e. it deletes all the files after generating them).
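
If you do have that kind of throughput-only server workload, a minimal
sketch of raising the knobs from userspace looks like the following. It
writes to the existing /proc/sys/vm/dirty_ratio and
/proc/sys/vm/dirty_background_ratio files; the 40/10 values are purely
illustrative, not a recommendation, and it needs root.

#include <stdio.h>
#include <stdlib.h>

/* Write one integer to a /proc/sys file; needs root. */
static void write_sysctl(const char *path, int value)
{
	FILE *f = fopen(path, "w");

	if (!f) {
		perror(path);
		exit(1);
	}
	fprintf(f, "%d\n", value);
	fclose(f);
}

int main(void)
{
	/* Illustrative values for a throughput-only server. */
	write_sysctl("/proc/sys/vm/dirty_ratio", 40);
	write_sysctl("/proc/sys/vm/dirty_background_ratio", 10);
	return 0;
}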

In other words, yes, 40% dirty can make a big difference to benchmarks, 
but is almost never actually a good idea any more.

That said, the _right_ thing to do is to 

 (a) limit dirty by number of bytes (in addition to having a percentage 
     limit). Current -git adds support for that.

 (b) scale it dynamically by your IO performance. No, current -git does 
     _not_ support this.

but just upping the percentage is not a good idea.
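
For reference, the byte-based limits from (a) show up as
/proc/sys/vm/dirty_bytes and /proc/sys/vm/dirty_background_bytes in
current -git. Point (b) is not implemented anywhere; the sketch below is
only a userspace toy illustrating the idea of deriving a byte limit from
an assumed device bandwidth and a target flush latency (both numbers are
made up).

#include <stdio.h>

/*
 * Toy illustration of (b): derive a dirty limit in bytes from an
 * estimated writeback bandwidth and a target flush latency, instead of
 * from a percentage of RAM.  The bandwidth figure and the 5-second
 * target are assumptions; the kernel does not do this today.
 */
int main(void)
{
	double writeback_mb_s = 50.0;	/* assumed device throughput */
	double target_flush_s = 5.0;	/* how long a sync may stall */

	double limit_mb = writeback_mb_s * target_flush_s;

	printf("suggested vm.dirty_bytes: %.0f MB (%.0f bytes)\n",
	       limit_mb, limit_mb * 1024.0 * 1024.0);
	return 0;
}

Writing the result into /proc/sys/vm/dirty_bytes (as in the earlier
write_sysctl() sketch) would then cap dirty memory by device throughput
rather than by RAM size.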

		Linus