Message-Id: <200703071353.l27DrdUx030558@sdl99w.sdl.hitachi.co.jp>
Date:	Wed, 07 Mar 2007 22:53:54 +0900
From:	Yuji Kakutani <yuji.kakutani.uw@...achi.com>
To:	Leroy van Logchem <leroy.vanlogchem@...elft.nl>
Cc:	linux-kernel@...r.kernel.org, akpm@...ux-foundation.org,
	miklos@...redi.hu, yumiko.sugita.yf@...achi.com,
	masami.hiramatsu.pt@...achi.com, hidehiro.kawai.ez@...achi.com,
	tomoki.sekiyama.qu@...achi.com, soshima@...hat.com,
	haoki@...hat.com
Subject: Re: [RFC][PATCH 0/3] VM throttling: avoid blocking occasional writers

Hi,
Thank you for your comments.

Leroy van Logchem wrote:
>The default dirty_ratio on most 2.6 kernels tends to be too large imo.
>If you are going to do sustained writes multiple times the size of
>the memory, you have at least two problems.
>
>1) The precious dentry and inode caches will be dropped, leaving you with
>a *very* unresponsive system.
>2) The amount of dirty_pages which need to be flushed to disk is huge,
>taking all VM while the i/o channel takes uninterruptible time to flush it.

I agree that the default dirty_ratio is too large.
What I have observed is that a write(2) to a disk under only light load
gets blocked in balance_dirty_pages() because of heavy writes to *another*
disk, which does not seem to be the same issue as the one you describe.
That said, decreasing dirty_ratio also shortens the blocking time in our
experiments.
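
Lowering the ratio for such an experiment can be done at run time through
/proc/sys/vm/dirty_ratio. Below is only a minimal sketch, not part of the
patch set; the value 5 is an illustrative choice, the same as doing
"echo 5 > /proc/sys/vm/dirty_ratio":

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	char buf[16];
	ssize_t n;
	int fd;

	/* Read the current setting. */
	fd = open("/proc/sys/vm/dirty_ratio", O_RDONLY);
	if (fd < 0) {
		perror("open /proc/sys/vm/dirty_ratio");
		return 1;
	}
	n = read(fd, buf, sizeof(buf) - 1);
	close(fd);
	if (n > 0) {
		buf[n] = '\0';
		printf("current vm.dirty_ratio: %s", buf);
	}

	/* Lower it (needs root); 5 is only an example value. */
	fd = open("/proc/sys/vm/dirty_ratio", O_WRONLY);
	if (fd < 0) {
		perror("open /proc/sys/vm/dirty_ratio for write");
		return 1;
	}
	if (write(fd, "5\n", 2) != 2)
		perror("write");
	close(fd);
	return 0;
}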

>What we really need is a 'cfq' for all processes, especially misbehaving
>ones like 'dd if=/dev/zero of=/location/large bs=1M count=10000'.
>If you want to DoS the 2.6 kernel, start an ever-running dd write and
>you know what I mean. Huge latencies due to the fact that all name_to_inode
>caches are lost and have to be fetched from disk again, only to be quickly
>flushed again and again. I already explained this disaster scenario to
>Linus, Andrew and Jens, hoping for an auto-tuning solution which takes
>disk speed per partition into account.

Thank you for the information.
Do you know of any progress on this issue?
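
For reference, the blocking I described above can be observed with a small
test program like the one below. This is only a rough sketch: /mnt/diskA
and /mnt/diskB are hypothetical mount points on two separate disks, the
heavy writer is e.g. the dd command above running against /mnt/diskA, and
the program measures how long individual small write(2) calls to the
lightly loaded /mnt/diskB stall.

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/time.h>
#include <unistd.h>

static double now(void)
{
	struct timeval tv;

	gettimeofday(&tv, NULL);
	return tv.tv_sec + tv.tv_usec / 1e6;
}

int main(void)
{
	char buf[4096];
	double t0, t1, worst = 0.0;
	int fd, i;

	memset(buf, 0, sizeof(buf));
	fd = open("/mnt/diskB/probe", O_WRONLY | O_CREAT | O_TRUNC, 0644);
	if (fd < 0) {
		perror("open /mnt/diskB/probe");
		return 1;
	}

	/* Write 16MB in 4KB chunks and record the slowest write(). */
	for (i = 0; i < 4096; i++) {
		t0 = now();
		if (write(fd, buf, sizeof(buf)) != (ssize_t)sizeof(buf)) {
			perror("write");
			break;
		}
		t1 = now();
		if (t1 - t0 > worst)
			worst = t1 - t0;
	}
	close(fd);

	printf("worst single 4KB write() latency: %.3f seconds\n", worst);
	return 0;
}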


Regards,
-- 
Tomoki Sekiyama & Yuji Kakutani
Hitachi, Ltd., Systems Development Laboratory

