Message-ID: <Pine.LNX.4.64.0708171333370.9404@schroedinger.engr.sgi.com>
Date:	Fri, 17 Aug 2007 13:37:05 -0700 (PDT)
From:	Christoph Lameter <clameter@....com>
To:	Peter Zijlstra <a.p.zijlstra@...llo.nl>
cc:	linux-mm@...ck.org, linux-kernel@...r.kernel.org,
	miklos@...redi.hu, akpm@...ux-foundation.org, neilb@...e.de,
	dgc@....com, tomoki.sekiyama.qu@...achi.com, nikita@...sterfs.com,
	trond.myklebust@....uio.no, yingchao.zhou@...il.com,
	richard@....demon.co.uk, torvalds@...ux-foundation.org, pj@....com
Subject: Re: [PATCH 00/23] per device dirty throttling -v9

On Fri, 17 Aug 2007, Peter Zijlstra wrote:

> Currently we do: 
>   dirty = total_dirty * bdi_completions_p * task_dirty_p
> 
> As dgc pointed out before, there is the issue of bdi/task correlation,
> that is, we do not track task dirty rates per bdi, so now a task that
> heavily dirties on one bdi will also get penalised on the others (and
> similar issues).

I think that is tolerable.
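
(A rough sketch of the computation quoted above, with illustrative
identifiers that do not match the actual symbols in the patchset:)

	/*
	 * Illustrative only: scale the global dirty limit by this
	 * bdi's share of writeback completions and this task's share
	 * of recently dirtied pages, per the formula above.
	 */
	static unsigned long dirty_limit(unsigned long total_dirty,
					 unsigned long bdi_completions,
					 unsigned long all_completions,
					 unsigned long task_dirties,
					 unsigned long all_dirties)
	{
		unsigned long dirty = total_dirty;

		if (!all_completions || !all_dirties)
			return dirty;	/* nothing tracked yet */

		dirty = dirty * bdi_completions / all_completions; /* bdi_completions_p */
		dirty = dirty * task_dirties / all_dirties;	   /* task_dirty_p */
		return dirty;
	}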
> 
> If we were to change it so:
>   dirty = cpuset_dirty * bdi_completions_p * task_dirty_p
> 
> We get additional correlation issues: cpuset/bdi, cpuset/task.
> Which could yield surprising results if some bdis are strictly per
> cpuset.

If we do not do the above, then the dirty page calculation for a small
cpuset (e.g. 1 node of a 128-node system) could allow enough dirty pages
to fill the entire node.
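
To make that concrete with illustrative numbers: on a 128-node machine
with 2GB per node and a dirty_ratio of 10%, the global dirty limit is

	256GB * 10% = 25.6GB

which is more than twelve times the 2GB available to a 1-node cpuset, so
every page on that node can be dirty long before throttling kicks in.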

> The cpuset/task correlation has a strict mapping and could be solved by
> keeping the vm_dirties counter per cpuset. However, this would seriously
> complicate the code and I'm not sure if it would gain us much.

The patchset that I referred to has code to calculate the dirty count and
ratio per cpuset by looping over its nodes. Currently we are having
trouble with small cpusets not performing writeout correctly, which can
result in OOM conditions because the whole node fills up with dirty
pages. If the cpuset boundaries are enforced strictly, the application
may then fail with an OOM.
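
The rough shape of that per-cpuset calculation, as a sketch (names are
illustrative; the patchset itself differs):

	/*
	 * Sketch only: derive the dirty limit from the memory of the
	 * nodes actually in the cpuset instead of from total memory.
	 */
	static unsigned long cpuset_dirty_limit(nodemask_t nodes,
						int dirty_ratio)
	{
		unsigned long pages = 0;
		int nid;

		/* Sum up only the nodes this cpuset may allocate from. */
		for_each_node_mask(nid, nodes)
			pages += node_present_pages(nid);

		return pages * dirty_ratio / 100;
	}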

We can compensate by recalculating the dirty_ratio based on the smallest
cpuset, but then larger cpusets are penalized. Also, one cannot set the
dirty_ratio below a certain minimum.
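
With the same illustrative numbers as above: keeping a 1-node (2GB)
cpuset safe on a 256GB machine would need roughly

	dirty_ratio <= 2GB / 256GB = ~0.8%

which is below the permitted minimum, and even if it were allowed, a
64-node (128GB) cpuset would then also be capped at about 2GB of dirty
pages.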

