Message-ID: <4548472A.50608@in.ibm.com>
Date:	Wed, 01 Nov 2006 12:35:14 +0530
From:	Balbir Singh <balbir@...ibm.com>
To:	Paul Menage <menage@...gle.com>
CC:	dev@...nvz.org, vatsa@...ibm.com, sekharan@...ibm.com,
	ckrm-tech@...ts.sourceforge.net, haveblue@...ibm.com,
	linux-kernel@...r.kernel.org, pj@....com, matthltc@...ibm.com,
	dipankar@...ibm.com, rohitseth@...gle.com
Subject: Re: [ckrm-tech] RFC: Memory Controller

Paul Menage wrote:
> On 10/31/06, Balbir Singh <balbir@...ibm.com> wrote:
>> I am still a little concerned about how limit size changes will be implemented.
>> Will the cpuset "mems" field change to reflect the changed limits?
> 
> That's how we've been doing it - increasing limits is easy, shrinking
> them is harder ...
> 
>>> Page cache control is actually more essential than RSS control, in our
>>> experience - it's pretty easy to track RSS values from userspace, and
>>> react reasonably quickly to kill things that go over their limit, but
>>> determining page cache usage (i.e. determining which job on the system
>>> is flooding the page cache with dirty buffers) is pretty much
>>> impossible currently.
>>>
>> Hmm... interesting. Why do you think it's impossible? What kinds of
>> issues have you run into?
>>
> 
> Issues such as:
> 
> - determining from userspace how much of the page cache is really
> "free" memory that can be given out to new jobs without impacting the
> performance of existing jobs
> 
> - determining which job on the system is flooding the page cache with
> dirty buffers
> 
> - accounting the active pagecache usage of a job as part of its memory
> footprint (if a process is only 1MB large but is seeking randomly
> through a 1GB file, treating it as only using/needing 1MB isn't
> practical).
> 
> Paul
> 

Thanks for the info!
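
A quick check that I follow the userspace side you mention: the RSS
policing would be roughly the sketch below, I assume; poll VmRSS from
/proc/<pid>/status and kill anything that stays over a per-job limit.
The pid and limit arguments are just placeholders, this isn't taken
from any existing tool:

/* Rough userspace sketch: read VmRSS for a pid from /proc/<pid>/status
 * and SIGKILL it once it exceeds a caller-supplied limit (in kB). */
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>

static long rss_kb(pid_t pid)
{
	char path[64], line[256];
	long kb = -1;
	FILE *f;

	snprintf(path, sizeof(path), "/proc/%d/status", (int)pid);
	f = fopen(path, "r");
	if (!f)
		return -1;
	while (fgets(line, sizeof(line), f))
		if (sscanf(line, "VmRSS: %ld kB", &kb) == 1)
			break;
	fclose(f);
	return kb;
}

int main(int argc, char **argv)
{
	pid_t pid;
	long limit_kb;

	if (argc < 3)
		return 1;
	pid = atoi(argv[1]);
	limit_kb = atol(argv[2]);	/* per-job limit, the admin's choice */
	if (rss_kb(pid) > limit_kb)
		kill(pid, SIGKILL);	/* the "react and kill" step */
	return 0;
}

That much is manageable; as you say, there is no equivalent view into
the page cache today.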

I thought this would be hard to do in general, but with the page -->
container mapping that the memory controller will bring with it, will
it still be that hard?
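
To make that concrete, the mapping I have in mind is something like the
sketch below (hypothetical names, plain C for illustration, with locking
and atomics omitted; this is not the actual controller code):

/* Each page carries a back-pointer to the container that brought it in,
 * so page cache usage can be charged and uncharged per container as
 * pages enter and leave the page cache. */
struct container {
	long pagecache_pages;	/* pages currently charged to this container */
	long pagecache_limit;	/* limit set by the administrator */
};

struct page_info {		/* stands in for per-page metadata */
	struct container *cont;	/* owning container, if any */
};

static void charge_page(struct page_info *pi, struct container *cont)
{
	pi->cont = cont;
	cont->pagecache_pages++;
}

static void uncharge_page(struct page_info *pi)
{
	if (pi->cont) {
		pi->cont->pagecache_pages--;
		pi->cont = NULL;
	}
}

With a back-pointer like that, questions such as "which job is flooding
the page cache with dirty buffers" become a walk over the dirty pages
rather than guesswork, at least in principle.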

I'll dig deeper.

-- 

	Balbir Singh,
	Linux Technology Center,
	IBM Software Labs
