Date:	Sat, 23 Jun 2007 20:23:50 +0200
From:	Peter Zijlstra <peterz@...radead.org>
To:	Linus Torvalds <torvalds@...ux-foundation.org>
Cc:	Matt Mackall <mpm@...enic.com>, Jens Axboe <jens.axboe@...cle.com>,
	Andrew Morton <akpm@...ux-foundation.org>, davej@...hat.com,
	tim.c.chen@...ux.intel.com, linux-kernel@...r.kernel.org
Subject: Re: Change in default vm_dirty_ratio

On Thu, 2007-06-21 at 16:08 -0700, Linus Torvalds wrote:
> 
> On Thu, 21 Jun 2007, Matt Mackall wrote:
> > 
> > Perhaps we want to throw some sliding window algorithms at it. We can
> > bound requests and total I/O and if requests get retired too slowly we
> > can shrink the windows. Alternatively, we can grow the window if we're
> > retiring things within our desired timeframe.
> 
> I suspect that would tend to be a good way to go. But it almost certainly 
> has to be per-device, which implies that somebody would have to do some 
> major coding/testing on this..
> 
> The vm_dirty_ratio thing is a global value, and I think we need that 
> regardless (for the independent issue of memory deadlocks etc), but if we 
> *additionally* had a per-device throttle that was based on some kind of 
> adaptive thing, we could probably raise the global (hard) vm_dirty_ratio a 
> lot.

I just did quite a bit of that:

  http://lkml.org/lkml/2007/6/14/437
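
To make the thread's windowing idea concrete, here is a minimal
userspace sketch, not the kernel code and not the patch series linked
above: a hypothetical per-device window that grows additively while
requests retire within a target latency and halves when they lag
(classic AIMD), next to the independent global vm_dirty_ratio cap that
Linus says must stay regardless. All names and constants here
(bdi_window, TARGET_RETIRE_MS, the window bounds) are illustrative
assumptions, not kernel API.

/*
 * Toy model of a per-device sliding writeback window.
 * Hypothetical names and constants throughout.
 */
#include <stdio.h>

struct bdi_window {
	unsigned int limit;	/* max in-flight writeback pages */
	unsigned int in_flight;	/* submitted but not yet retired */
};

#define WINDOW_MIN		16u
#define WINDOW_MAX		4096u
#define TARGET_RETIRE_MS	100u	/* desired retire latency (assumed) */

/* Admit new writeback only while the window has room. */
static int bdi_may_submit(const struct bdi_window *w)
{
	return w->in_flight < w->limit;
}

/*
 * On each retired request, adapt the window: additive increase while
 * completions meet the target, multiplicative decrease when they lag --
 * the shrink/grow behaviour Matt describes.
 */
static void bdi_retire(struct bdi_window *w, unsigned int retire_ms)
{
	if (w->in_flight)
		w->in_flight--;

	if (retire_ms <= TARGET_RETIRE_MS) {
		if (w->limit < WINDOW_MAX)
			w->limit++;		/* grow gently */
	} else {
		w->limit /= 2;			/* shrink fast */
		if (w->limit < WINDOW_MIN)
			w->limit = WINDOW_MIN;
	}
}

/* The global hard cap stays device-independent, as Linus notes. */
static int over_global_dirty_limit(unsigned long dirty_pages,
				   unsigned long dirtyable_pages,
				   unsigned int vm_dirty_ratio)
{
	return dirty_pages * 100 > dirtyable_pages * vm_dirty_ratio;
}

int main(void)
{
	struct bdi_window w = { .limit = 64, .in_flight = 64 };

	/* A slow retirement (250 ms) halves the window. */
	bdi_retire(&w, 250);
	printf("slow retire: limit=%u in_flight=%u submit_ok=%d\n",
	       w.limit, w.in_flight, bdi_may_submit(&w));

	/* A fast retirement (50 ms) grows it by one. */
	bdi_retire(&w, 50);
	printf("fast retire: limit=%u in_flight=%u\n", w.limit, w.in_flight);

	printf("over global limit: %d\n",
	       over_global_dirty_limit(12000, 100000, 10));
	return 0;
}

This only shows the feedback shape; for the mechanism actually
proposed, see the per-device work in the patch series linked above.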

