Message-ID: <Pine.LNX.4.64.0811042036000.31167@quilx.com>
Date:	Tue, 4 Nov 2008 20:45:17 -0600 (CST)
From:	Christoph Lameter <cl@...ux-foundation.org>
To:	Andrew Morton <akpm@...ux-foundation.org>
cc:	peterz@...radead.org, rientjes@...gle.com, npiggin@...e.de,
	menage@...gle.com, dfults@....com, linux-kernel@...r.kernel.org,
	containers@...ts.osdl.org
Subject: Re: [patch 0/7] cpuset writeback throttling

On Tue, 4 Nov 2008, Andrew Morton wrote:

> In a memcg implementation what we would implement is "throttle
> page-dirtying tasks in this memcg when the memcg's dirty memory reaches
> 40% of its total".

Right, that is similar to what this patch does for cpusets. A memcg 
implementation would need to figure out whether we are currently part of a 
memcg and then determine what percentage of that memcg's memory is dirty.
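
In rough C, the check could look like the sketch below. This is 
illustrative only; the struct and counter names are made up and are not 
an existing memcg API:

#include <stdbool.h>

/* Hypothetical per-memcg counters; not the real struct mem_cgroup. */
struct memcg_stats {
	unsigned long nr_dirty;		/* dirty pages charged to the memcg */
	unsigned long nr_writeback;	/* pages currently under writeback */
	unsigned long limit;		/* total pages the memcg may use */
};

/*
 * Throttle page-dirtying tasks once dirty memory in the memcg crosses
 * the given ratio (40 in the example above).
 */
static bool memcg_over_dirty_ratio(const struct memcg_stats *s,
				   unsigned int ratio)
{
	unsigned long dirty = s->nr_dirty + s->nr_writeback;

	return dirty * 100 > s->limit * ratio;
}

The comparison is trivial; the hard part is keeping nr_dirty and 
nr_writeback accurate per memcg, which is what the additional counters 
discussed below would provide.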

That is one aspect. When performing writeback we then need to figure out 
which inodes have dirty pages in the memcg and start writeout on those 
inodes, not on others whose dirty pages belong elsewhere. Both of these 
components, the dirty accounting and the inode selection, are in this 
patch and would also have to be implemented for a memcg.
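
The inode side could be sketched as below. memcg_inode_has_dirty() stands 
in for whatever memcg-inode association would have to be maintained; it 
does not exist today, so treat all of this as assumption:

#include <stdbool.h>

struct inode;
struct memcg;

/* Assumed association query; would require new infrastructure. */
extern bool memcg_inode_has_dirty(struct memcg *mg, struct inode *inode);
extern void writeout_inode(struct inode *inode);

/*
 * Start writeout only on inodes that actually hold dirty pages
 * belonging to this memcg, skipping the rest.
 */
static void writeback_for_memcg(struct memcg *mg,
				struct inode **dirty_inodes, int n)
{
	int i;

	for (i = 0; i < n; i++) {
		if (!memcg_inode_has_dirty(mg, dirty_inodes[i]))
			continue;	/* its dirty pages live elsewhere */
		writeout_inode(dirty_inodes[i]);
	}
}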

> But that doesn't solve the problem which this patchset is trying to
> solve, which is "don't let all the memory in all this group of nodes
> get dirty".

This patch would solve the problem if the dirty-page calculation 
considered the active memcg and could determine the number of dirty pages 
(through some sort of additional memcg counters). That is just the first 
part, though. The second part, finding the inodes that have dirty pages 
for writeback, would require an association between memcgs and inodes.
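
One way such an association could look is to tag each inode with the set 
of memcgs that hold dirty pages in it, keyed by a memcg id. All of the 
names here are invented, purely to make the idea concrete:

#include <stdbool.h>
#include <limits.h>

#define MAX_MEMCGS	256
#define BITS_PER_LONG	(sizeof(unsigned long) * CHAR_BIT)

/* Hypothetical per-inode tag: which memcgs have dirtied pages here. */
struct inode_memcg_tags {
	unsigned long dirty_memcgs[MAX_MEMCGS / BITS_PER_LONG];
};

/* Called when a page of this inode is dirtied on behalf of memcg_id. */
static void tag_inode_dirty(struct inode_memcg_tags *t, unsigned int memcg_id)
{
	t->dirty_memcgs[memcg_id / BITS_PER_LONG] |=
		1UL << (memcg_id % BITS_PER_LONG);
}

/* Writeback checks the tag to decide whether this inode is relevant. */
static bool inode_dirty_for(const struct inode_memcg_tags *t,
			    unsigned int memcg_id)
{
	return t->dirty_memcgs[memcg_id / BITS_PER_LONG] &
	       (1UL << (memcg_id % BITS_PER_LONG));
}

A fixed-size bitmap is of course a simplification; clearing the tag when 
the last dirty page of an inode is written out is the part that needs care.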

> What happens if cpuset A uses nodes 0,1,2,3,4,5,6,7,8,9 and cpuset B
> uses nodes 0,1?  Can activity in cpuset A cause ooms in cpuset B?

Yes, if the activity of cpuset A causes all pages in cpuset B to be 
dirtied and cpuset B then attempts to do writeback. Cpuset B will fail to 
acquire enough memory for writeback, which makes reclaim impossible.

Typically cpusets are not overlapped like that but are used to segment 
the system.

The system would work correctly if the dirty ratio calculation were done 
over all overlapping cpusets/memcg groups that contain nodes from which 
allocations are permitted.
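
For illustration, such a calculation over exactly the set of permitted 
nodes could look like this. Standalone sketch; the per-node counters are 
assumptions, not existing kernel symbols:

#include <stdbool.h>

#define MAX_NODES	64

/* Assumed per-node page counters. */
struct node_stats {
	unsigned long nr_dirty;
	unsigned long nr_total;
};

/*
 * Compute the dirty ratio over the union of nodes the allocating task
 * may use, so overlapping cpusets/memcgs see each other's dirty pages
 * on shared nodes.
 */
static bool over_dirty_ratio(const struct node_stats stats[MAX_NODES],
			     const bool allowed[MAX_NODES],
			     unsigned int ratio)
{
	unsigned long dirty = 0, total = 0;
	int node;

	for (node = 0; node < MAX_NODES; node++) {
		if (!allowed[node])
			continue;
		dirty += stats[node].nr_dirty;
		total += stats[node].nr_total;
	}

	return total && dirty * 100 > total * ratio;
}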
