Date:	Thu, 8 Jan 2009 20:57:28 +0100
From:	Jan Kara <jack@...e.cz>
To:	Linus Torvalds <torvalds@...ux-foundation.org>
Cc:	Chris Mason <chris.mason@...cle.com>,
	David Miller <davem@...emloft.net>, akpm@...ux-foundation.org,
	peterz@...radead.org, jack@...e.cz, linux-kernel@...r.kernel.org,
	linux-mm@...ck.org, npiggin@...e.de
Subject: Re: Increase dirty_ratio and dirty_background_ratio?

On Thu 08-01-09 09:05:01, Linus Torvalds wrote:
> On Thu, 8 Jan 2009, Chris Mason wrote:
> > 
> > Does it make sense to hook into kupdate?  If kupdate finds it can't meet
> > the no-data-older-than 30 seconds target, it lowers the sync/async combo
> > down to some reasonable bottom.  
> > 
> > If it finds it is going to sleep without missing the target, raise the
> > combo up to some reasonable top.
> 
> I like autotuning, so that sounds like an intriguing approach. It's worked 
> for us before (ie VM).
> 
> That said, 30 seconds sounds like a _loong_ time for something like this. 
> I'd use the normal 5-second dirty_writeback_interval for this: if we can't 
> clean the whole queue in that normal background writeback interval, then 
> we try to lower the targets. We already have that "congestion_wait()" thing
> there, that would be a logical place, methinks.
  But I think there are workloads for which this is suboptimal, to say the
least. Imagine you do some crazy LDAP database crunching or a similar load
which randomly writes to a big file (big meaning its size is roughly
comparable to your available memory). The kernel finds pdflush isn't able to
flush the data fast enough, so we decrease the dirty limits. This results in
even more aggressive flushing, but that makes things even worse (in the sense
that your application runs slower and the disk is busy all the time anyway).
This is the kind of load where we observe problems currently.
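  Something like the toy program below shows the kind of access pattern I
mean (the file name and the sizes are made up, it is just an illustration):

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define FILE_SIZE  (2UL * 1024 * 1024 * 1024)  /* roughly the size of RAM */
#define PAGE_SZ    4096UL

int main(void)
{
        char buf[PAGE_SZ];
        int fd = open("bigfile", O_RDWR | O_CREAT, 0644);

        if (fd < 0 || ftruncate(fd, FILE_SIZE) < 0) {
                perror("bigfile");
                return 1;
        }
        memset(buf, 0xaa, sizeof(buf));
        srand(42);
        for (;;) {
                /* Dirty one random page somewhere in the file. */
                off_t off = ((off_t)(rand() % (FILE_SIZE / PAGE_SZ))) * PAGE_SZ;

                if (pwrite(fd, buf, PAGE_SZ, off) != (ssize_t)PAGE_SZ) {
                        perror("pwrite");
                        return 1;
                }
        }
        return 0;
}
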
  Ideally, we could observe that we write out the same pages again and again
(or even pages close to them) and in that case be less aggressive about
writeback on that file. But it feels a bit overcomplicated...
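  Roughly what I have in mind is sketched below; nothing like this exists
today, and the structure, fields and thresholds are completely made up, just
to write the idea down:

#define REWRITE_WINDOW  16      /* "close" means within 16 pages */
#define REWRITE_LIMIT   128     /* back off after this many nearby hits */

/* Purely hypothetical per-inode writeback hint. */
struct wb_hint {
        unsigned long last_written_index;  /* last page index we wrote back */
        unsigned long rewrite_streak;      /* consecutive nearby writebacks */
};

/*
 * Call for every page we are about to write back; returns nonzero when we
 * keep rewriting the same neighbourhood and should throttle writeback for
 * this file.
 */
static int rewrite_detected(struct wb_hint *h, unsigned long index)
{
        unsigned long dist = index > h->last_written_index ?
                                index - h->last_written_index :
                                h->last_written_index - index;

        if (dist <= REWRITE_WINDOW)
                h->rewrite_streak++;
        else
                h->rewrite_streak = 0;
        h->last_written_index = index;

        return h->rewrite_streak > REWRITE_LIMIT;
}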

> I'm not sure how to raise them, though. We don't want to raise any limits 
> just because the user suddenly went idle. I think the raising should 
> happen if we hit the sync/async ratio, and we haven't lowered in the last 
> 30 seconds or something like that.
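  Written down, that raise/lower policy could look something like the sketch
below (the numbers and names are invented, this is just to make the rule
explicit):

#include <time.h>

#define RAISE_HOLDOFF   30      /* don't raise within 30 s of a lowering */

static int dirty_background_pct = 10;   /* async/background limit, % of memory */
static int dirty_sync_pct = 20;         /* sync limit, % of memory */
static time_t last_lowered;

/* Background writeback could not keep up: lower both limits. */
static void lower_limits(void)
{
        if (dirty_background_pct > 5)
                dirty_background_pct--;
        if (dirty_sync_pct > 10)
                dirty_sync_pct--;
        last_lowered = time(NULL);
}

/* Raise only if we actually hit the sync limit and have not lowered lately. */
static void maybe_raise_limits(int hit_sync_limit)
{
        if (!hit_sync_limit)
                return;
        if (time(NULL) - last_lowered < RAISE_HOLDOFF)
                return;
        if (dirty_background_pct < 40)
                dirty_background_pct++;
        if (dirty_sync_pct < 80)
                dirty_sync_pct++;
}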

									Honza
-- 
Jan Kara <jack@...e.cz>
SUSE Labs, CR
