Message-ID: <Pine.LNX.4.64.0704280824420.2502@artax.karlin.mff.cuni.cz>
Date:	Sat, 28 Apr 2007 08:32:38 +0200 (CEST)
From:	Mikulas Patocka <mikulas@...ax.karlin.mff.cuni.cz>
To:	Linus Torvalds <torvalds@...ux-foundation.org>
Cc:	Mike Galbraith <efault@....de>,
	LKML <linux-kernel@...r.kernel.org>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Jens Axboe <jens.axboe@...cle.com>
Subject: Re: [ext3][kernels >= 2.6.20.7 at least] KDE going comatose when FS
 is under heavy write load (massive starvation)



On Fri, 27 Apr 2007, Linus Torvalds wrote:

>
>
> On Fri, 27 Apr 2007, Mike Galbraith wrote:
>>
>> As subject states, my GUI is going away for extended periods of time
>> when my very full and likely highly fragmented (how to find out)
>> filesystem is under heavy write load.  While write is under way, if
>> amarok (mp3 player) is running, no song change will occur until write is
>> finished, and the GUI can go _entirely_ comatose for very long periods.
>> Usually, it will come back to life after write is finished, but
>> occasionally, a complete GUI restart is necessary.
>
> One thing to try out (and dammit, I should make it the default now in
> 2.6.21) is to just make the dirty limits much lower. We've been talking
> about this for ages, I think this might be the right time to do it.
>
> Especially with lots of memory, allowing 40% of that memory to be dirty is
> just insane (even if we limit it to "just" 40% of the normal memory zone).
> That can be gigabytes. And no amount of IO scheduling will make it
> pleasant to try to handle the situation where that much memory is dirty.
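
To put illustrative numbers on that (figures purely invented as an
example): with 4 GB of memory, a 40% limit allows roughly 1.6 GB of dirty
data; at a sustained 50 MB/s of writeback that is over 30 seconds of
flushing that any newly throttled writer can end up waiting behind.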

What about using different dirty-page limits for different processes?

--- i.e. every process has a dirty-page activity counter that is
increased when it dirties a page and decays over time. Compute the limit
for each process as some inverse of this counter --- so that processes
that have dirtied a lot of pages are blocked at a lower limit and
processes that have dirtied only a few pages are blocked at a higher
limit.
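
Roughly, a minimal userspace sketch of what I mean (not kernel code; the
names, constants and the exact decay/limit formulas are all invented for
illustration --- in the kernel this would presumably hook in somewhere
around balance_dirty_pages() in mm/page-writeback.c):

#include <stdio.h>

/* Invented placeholder for the global dirty limit, in 4 KB pages. */
#define GLOBAL_DIRTY_LIMIT_PAGES	(256 * 1024)	/* = 1 GB */

struct task_dirty {
	unsigned long activity;	/* decaying count of recently dirtied pages */
};

/* Called whenever the task dirties a page. */
static void task_dirtied_page(struct task_dirty *t)
{
	t->activity++;
}

/* Called periodically (say once a second) so old activity decays away. */
static void task_decay(struct task_dirty *t)
{
	t->activity /= 2;
}

/*
 * Per-task dirty limit: some inverse of the activity counter, so heavy
 * dirtiers get a low limit and light dirtiers keep most of the global one.
 */
static unsigned long task_dirty_limit(const struct task_dirty *t)
{
	return GLOBAL_DIRTY_LIMIT_PAGES / (1 + t->activity);
}

/* A task should be throttled once system-wide dirty pages exceed its limit. */
static int task_should_throttle(const struct task_dirty *t,
				unsigned long nr_dirty_pages)
{
	return nr_dirty_pages > task_dirty_limit(t);
}

int main(void)
{
	struct task_dirty tar  = { .activity = 0 };
	struct task_dirty bash = { .activity = 0 };
	unsigned long nr_dirty = 100000;	/* dirty pages system-wide now */
	unsigned long i;

	/* tar dirties 200000 pages, bash dirties a single one. */
	for (i = 0; i < 200000; i++)
		task_dirtied_page(&tar);
	task_dirtied_page(&bash);

	/* One decay step, as the periodic timer would do. */
	task_decay(&tar);
	task_decay(&bash);

	printf("tar:  limit %lu pages, throttle=%d\n",
	       task_dirty_limit(&tar), task_should_throttle(&tar, nr_dirty));
	printf("bash: limit %lu pages, throttle=%d\n",
	       task_dirty_limit(&bash), task_should_throttle(&bash, nr_dirty));
	return 0;
}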

The main problem with the single global limit is that if the user
extracts a tar archive, tar eventually blocks on writeback I/O --- O.K.
But if bash attempts to write one page to the .bash_history file at the
same time, it blocks too --- bad, the user is annoyed.
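
(That is exactly what the main() in the sketch above illustrates: tar's
large activity counter pushes its personal limit down to a couple of
pages, so it throttles early, while bash's counter stays near zero and
its single page goes out unthrottled.)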

(I don't have time to write and test it; it is just an idea --- I have
found these whole-system writeback lockups annoying too.)

Mikulas