Message-ID: <44D76B43.5080507@sw.ru>
Date: Mon, 07 Aug 2006 20:33:07 +0400
From: Kirill Korotaev <dev@...ru>
To: "Martin J. Bligh" <mbligh@...igh.org>
CC: vatsa@...ibm.com, Alan Cox <alan@...rguk.ukuu.org.uk>,
Andrew Morton <akpm@...l.org>, mingo@...e.hu,
nickpiggin@...oo.com.au, sam@...ain.net,
linux-kernel@...r.kernel.org, dev@...nvz.org, efault@....de,
balbir@...ibm.com, sekharan@...ibm.com, nagar@...son.ibm.com,
haveblue@...ibm.com, pj@....com, Andrey Savochkin <saw@...ru>
Subject: Re: [RFC, PATCH 0/5] Going forward with Resource Management - A cpu
controller
>> we already have the code to account page fractions shared between
>> containers.
>> Though, it is quite useless to do so for threads... since these numbers
>> have no meaning on their own (they are not a real usage)
>> and only their sum is a correct value.
>>
> That sort of accounting poses various horrible problems, which is
> why we steered away from it. If you share pages between containers
> (presumably billing them equal shares per user), what happens
> when you're already at your limit, and one of your sharers exits?
You go over your limit and new allocations will fail.
BUT! IMPORTANT!
In the real-life use case with OpenVZ, not that much data is shared across
containers: mostly vmas mapped as private, e.g. the .data sections of glibc
and other libraries (and code sections if they are writable). So if the
containers use the same glibc and the same executables, each of them is
charged only a fraction of the physical memory used by them.
This kind of sharing is not that big (<< memory limits usually),
so the situation you describe is not a problem
in real life (at least for OpenVZ).
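
If it helps, here is a toy sketch of that behaviour in plain C (invented
names like toy_container and toy_charge, not the real user beancounter
code): a page shared by N containers is charged PAGE_SIZE/N to each, and
when one sharer exits, its fraction is re-charged to the survivors, which
can push them over their limit; after that, new charges fail.

#include <stdio.h>

#define PAGE_SIZE 4096UL

struct toy_container {
        unsigned long charged;  /* bytes currently charged */
        unsigned long limit;    /* hard limit in bytes */
};

/* a new allocation: refused if it would exceed the limit */
static int toy_charge(struct toy_container *c, unsigned long bytes)
{
        if (c->charged + bytes > c->limit)
                return -1;
        c->charged += bytes;
        return 0;
}

/* re-charge a departed sharer's fraction: forced, may exceed the limit */
static void toy_recharge(struct toy_container *c, unsigned long bytes)
{
        c->charged += bytes;
}

int main(void)
{
        struct toy_container a = { .charged = 0, .limit = 4 * PAGE_SIZE };
        int i;

        /* 8 pages, each shared with one other container: half a page each */
        for (i = 0; i < 8; i++)
                toy_charge(&a, PAGE_SIZE / 2);          /* exactly at the limit */

        /* the other sharer exits: its halves are re-charged to 'a' */
        for (i = 0; i < 8; i++)
                toy_recharge(&a, PAGE_SIZE / 2);        /* now 8 pages > limit */

        printf("charged %lu, limit %lu\n", a.charged, a.limit);

        /* any further allocation is now refused */
        if (toy_charge(&a, PAGE_SIZE) < 0)
                printf("new allocation refused\n");
        return 0;
}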
> Plus, are you billing by vma or address_space?
Not sure what you mean. We charge pages: at first the whole vma is charged,
but as a page gets used by more containers, your share of it decreases.
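
A toy sketch of that per-page fraction (again invented names, not the
actual accounting code): the first mapper is charged the whole page, and
each additional mapper drops the per-container share to
PAGE_SIZE / nr_sharers, so only the sum over all sharers gives the real
usage.

#include <stdio.h>

#define PAGE_SIZE 4096UL

/* per-container charge for one page mapped by 'sharers' containers */
static unsigned long toy_page_share(unsigned int sharers)
{
        return PAGE_SIZE / sharers;     /* integer rounding ignored here */
}

int main(void)
{
        unsigned int n;

        for (n = 1; n <= 4; n++)
                printf("%u sharer(s): %lu bytes charged to each\n",
                       n, toy_page_share(n));
        return 0;
}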
Thanks,
Kirill