Message-ID: <4FC9B183.10605@oracle.com>
Date: Sat, 02 Jun 2012 14:24:03 +0800
From: Jeff Liu <jeff.liu@...cle.com>
To: Kirill Korotaev <dev@...allels.com>
CC: Serge Hallyn <serge.hallyn@...onical.com>,
"tytso@....edu" <tytso@....edu>,
"tinguely@....com" <tinguely@....com>,
"containers@...ts.linux-foundation.org"
<containers@...ts.linux-foundation.org>,
"david@...morbit.com" <david@...morbit.com>,
"cgroups@...r.kernel.org" <cgroups@...r.kernel.org>,
"hch@...radead.org" <hch@...radead.org>,
"bpm@....com" <bpm@....com>,
"christopher.jones@...cle.com" <christopher.jones@...cle.com>,
"linux-fsdevel@...r.kernel.org" <linux-fsdevel@...r.kernel.org>,
Jan Kara <jack@...e.cz>, "tm@....ma" <tm@....ma>,
"linux-ext4@...r.kernel.org" <linux-ext4@...r.kernel.org>,
"chris.mason@...cle.com" <chris.mason@...cle.com>
Subject: Re: container disk quota
On 06/02/2012 02:06 PM, Kirill Korotaev wrote:
>
> On Jun 2, 2012, at 09:59 , Jeff Liu wrote:
>
>> Hi Serge,
>>
>> On 06/02/2012 12:04 AM, Serge Hallyn wrote:
>>
>>> Quoting Jan Kara (jack@...e.cz):
>>>> Hello,
>>>>
>>>> On Wed 30-05-12 22:58:54, jeff.liu@...cle.com wrote:
>>>>> According to Glauber's comments regarding container disk quota, it should be bound to the mount
>>>>> namespace rather than to a cgroup.
>>>>>
>>>>> In my try-out, it works just fine when combined with the userland quota utilities in this way.
>>>>> However, something has to be done in the user tools too, IMHO.
>>>>>
>>>>> Currently, the patchset is in a very early phase; I'd like to post it early to get more
>>>>> feedback from you guys.
>>>>>
>>>>> Hopefully I can clarify my ideas clearly.
>>>> So what I miss in this introductory email is some high-level description,
>>>> like what the desired functionality you are trying to implement is and what it is
>>>> good for. Looking at the examples below, it seems you want to be able to
>>>> set quota limits for namespace-uid (and also namespace-gid???) pairs, am I
>>>> right?
>>>>
>>>> If yes, then I would like to understand one thing: When writing to a
>>>> file, used space is accounted to the owner of the file. Now how do we
>>>> determine the owning namespace? Do you implicitly assume that only processes
>>>> from one namespace will be able to access the file?
>>>>
>>>> Honza
>>>
>>> Not having looked closely at the original patchset, let me ask - is this
>>> feature going to be a freebie with Eric's usernamespace patches?
>>
>> If we can reach a consensus on binding quota to the mount namespace
>> for containers or other things, maybe.
>
> 1. OpenVZ doesn't use mount namespaces and still has quotas per container.
AFAICS, OpenVZ ships its own quota tools to provide this feature.
>
> 2. BTW, have you seen Dmitry Monakhov's patches for the same container quotas via an additional inode attribute? They allow making it journaled.
You mean the directory/project quota on ext4?
If so, I noticed this feature at the end of last year on the ext4
mailing list.
> How quotas are stored in your case?
It is simply cached in memory for now. I think it could also be made
journaled by introducing the corresponding quota_read/quota_write
routines for the particular journaling file system.
>
> 3. I tend to think that nowadays such quotas may be of less need. Quota code doesn't scale well. And it's easier to put a container in an image file (as OpenVZ recently introduced).
Such requirements still come up on the LXC mailing list nowadays.
Directory quota is pretty cool, and it is also useful from the
container perspective. However, those are two different quota
mechanisms.
"Quota code doesn't scale well":
Do you mean that it has a global locking mechanism and a single quota
structure to bill quota for all file systems with VFS quota enabled?
I noticed that OpenVZ has introduced an image file to provide container
quota, especially with container migration in mind. However, could it be
a general solution for LXC?
Thanks,
-Jeff
>
> Thanks,
> Kirill
>
--
To unsubscribe from this list: send the line "unsubscribe linux-ext4" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html