Message-Id: <20090108030245.e7c8ceaf.akpm@linux-foundation.org>
Date:	Thu, 8 Jan 2009 03:02:45 -0800
From:	Andrew Morton <akpm@...ux-foundation.org>
To:	David Miller <davem@...emloft.net>
Cc:	torvalds@...ux-foundation.org, peterz@...radead.org, jack@...e.cz,
	linux-kernel@...r.kernel.org, linux-mm@...ck.org, npiggin@...e.de
Subject: Re: Increase dirty_ratio and dirty_background_ratio?

On Wed, 07 Jan 2009 12:51:33 -0800 (PST) David Miller <davem@...emloft.net> wrote:

> From: Linus Torvalds <torvalds@...ux-foundation.org>
> Date: Wed, 7 Jan 2009 08:39:01 -0800 (PST)
> 
> > On Wed, 7 Jan 2009, Peter Zijlstra wrote:
> > > 
> > > >   So the question is: What kind of workloads are lower limits supposed to
> > > > help? Desktop? Has anybody reported that they actually help? I'm asking
> > > > because we are probably going to increase limits to the old values for
> > > > SLES11 if we don't see serious negative impact on other workloads...
> > > 
> > > Adding some CCs.
> > > 
> > > The idea was that 40% of the memory is a _lot_ these days, and writeback
> > > times will be huge for those hitting sync or similar. By lowering these
> > > you'd smooth that out a bit.
> > 
> > Not just a bit. If you have 4GB of RAM (not at all unusual for even just a 
> > regular desktop, never mind a "real" workstation), it's simply crazy to 
> > allow 1.5GB of dirty memory. Not unless you have a really wicked RAID 
> > system with great write performance that can push it out to disk (with 
> > seeking) in just a few seconds.
> > 
> > And few people have that.
> > 
> > For a server, where throughput matters but latency generally does not, go 
> > ahead and raise it. But please don't raise it for anything sane. The only 
> > time it makes sense to up that percentage is for some odd special-case 
> > benchmark that can otherwise fit the dirty data set in memory and never 
> > syncs it (i.e. it deletes all the files after generating them).
> > 
> > In other words, yes, 40% dirty can make a big difference to benchmarks, 
> > but is almost never actually a good idea any more.
> 
> I have to say that my workstation is still helped by reverting this
> change, and all I do is play around in git trees and read email.
> 
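
To put rough numbers on the 4GB example above (the ~60MB/s figure is an
assumed streaming rate for a single desktop disk of the period, not
something stated in the thread):

    40% of 4GB          ~= 1.6GB of dirty memory allowed
    1.6GB / ~60MB/s     ~= 27 seconds of purely sequential writeback

With seeking mixed in, sustained throughput drops well below that, so a
sync issued against a full dirty limit can easily stall for minutes.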

The kernel can't get this right: it doesn't know the usage
patterns, workloads, etc.  It's rather disappointing that distros appear
to have put so little work into finding ways of setting suitable values
for this and for other tunables.

Maybe we should set them to 1%, or 99% or something similarly stupid to
force the issue.

Yes, perhaps the kernel's default percentage should be larger on
smaller-memory systems, and smaller on slow-disk systems, and so on.  But
initscripts already have all the information needed to do this, and have the
advantage that any such scripts are backportable to five-year-old kernels.

So I say leave it as-is.  If SUSE can come up with a scriptlet that scales
this according to memory size, disk speed, workload, etc., then good for
them: it'll produce a better end result.
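
As a rough illustration of the kind of scriptlet being suggested (a sketch
only: the percentages and memory thresholds below are made-up placeholders,
not values any distro actually shipped):

    #!/bin/sh
    # Boot-time sketch: scale the dirty thresholds by machine size.
    # Reads MemTotal (in kB) from /proc/meminfo and picks smaller
    # percentages on large-memory boxes, since a fixed 40% of a lot of
    # RAM is a huge absolute amount of dirty data to write back.

    mem_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)

    if [ "$mem_kb" -ge 8388608 ]; then        # >= 8GB
        dirty=5;  background=2
    elif [ "$mem_kb" -ge 2097152 ]; then      # >= 2GB
        dirty=10; background=5
    else
        dirty=20; background=10               # small boxes tolerate a larger %
    fi

    sysctl -w vm.dirty_ratio="$dirty"
    sysctl -w vm.dirty_background_ratio="$background"

A real version would presumably also look at disk speed (and whatever hints
about workload the admin can give it), which is exactly the information the
kernel itself doesn't have.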

