Message-ID: <87oay9j1pr.fsf@x220.int.ebiederm.org>
Date:	Tue, 03 Jun 2014 11:18:56 -0700
From:	ebiederm@...ssion.com (Eric W. Biederman)
To:	Pavel Emelyanov <xemul@...allels.com>
Cc:	Serge Hallyn <serge.hallyn@...ntu.com>, Marian Marinov <mm@...com>,
	Linux Containers <containers@...ts.linux-foundation.org>,
	LXC development mailing-list 
	<lxc-devel@...ts.linuxcontainers.org>,
	"linux-kernel\@vger.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [RFC] Per-user namespace process accounting

Pavel Emelyanov <xemul@...allels.com> writes:

> On 06/03/2014 09:26 PM, Serge Hallyn wrote:
>> Quoting Pavel Emelyanov (xemul@...allels.com):
>>> On 05/29/2014 07:32 PM, Serge Hallyn wrote:
>>>> Quoting Marian Marinov (mm@...com):
>>>>> -----BEGIN PGP SIGNED MESSAGE-----
>>>>> Hash: SHA1
>>>>>
>>>>> On 05/29/2014 01:06 PM, Eric W. Biederman wrote:
>>>>>> Marian Marinov <mm@...com> writes:
>>>>>>
>>>>>>> Hello,
>>>>>>>
>>>>>>> I have the following proposition.
>>>>>>>
>>>>>>> The number of currently running processes is accounted at the root user namespace. The problem I'm facing
>>>>>>> is that multiple containers in different user namespaces share the process counters.
>>>>>>
>>>>>> That is deliberate.
>>>>>
>>>>> And I understand that very well ;)
>>>>>
>>>>>>
>>>>>>> So if containerX runs 100 processes with UID 99, containerY should have an NPROC limit above 100 in order
>>>>>>> to execute any processes with its own UID 99.
>>>>>>>
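
A quick way to see that sharing from userspace; a minimal sketch, with an
arbitrary limit value -- RLIMIT_NPROC is checked against all processes owned
by the same kuid anywhere on the host, in any user namespace:

#include <stdio.h>
#include <sys/resource.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
	struct rlimit rl = { .rlim_cur = 100, .rlim_max = 100 };

	if (setrlimit(RLIMIT_NPROC, &rl) < 0)
		perror("setrlimit");

	for (int i = 0; i < 200; i++) {
		pid_t pid = fork();
		if (pid < 0) {
			/* EAGAIN: this uid already owns too many
			 * processes machine-wide, regardless of which
			 * user namespace they live in */
			perror("fork");
			break;
		}
		if (pid == 0) {
			sleep(60);	/* hold the slot for the demo */
			_exit(0);
		}
	}
	return 0;
}

Run two copies as the same uid in two different user namespaces, and the
second copy's forks start failing once the first has filled the counter.
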
>>>>>>> I know that some of you will tell me that I should not provision all of my containers with the same UID/GID maps,
>>>>>>> but this brings another problem.
>>>>>>>
>>>>>>> We are provisioning the containers from a template. The template has a lot of files, 500k and more, and
>>>>>>> chowning these causes a lot of I/O and also slows down provisioning considerably.
>>>>>>>
>>>>>>> The other problem is that when we migrate a container from one host machine to another, the IDs may
>>>>>>> already be in use on the new machine and we need to chown all the files again.
>>>>>>
>>>>>> You should have the same uid allocations for all machines in your fleet as much as possible.   That has been true
>>>>>> ever since NFS was invented and is not new here.  You can avoid the cost of chowning if you untar your files inside
>>>>>> of your user namespace.  You can have different maps per machine if you are crazy enough to do that.  You can even
>>>>>> have shared uids that you use to share files between containers as long as none of those files is setuid.  And map
>>>>>> those shared files to some kind of nobody user in your user namespace.
>>>>>
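
"Untar inside your user namespace" needs no privilege at all; a rough sketch
with a single-entry map -- a real container setup would map a whole id range
via the newuidmap/newgidmap helpers, and the archive name and target path
here are placeholders:

#define _GNU_SOURCE
#include <fcntl.h>
#include <sched.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static void write_map(const char *path, const char *line)
{
	int fd = open(path, O_WRONLY);

	if (fd < 0)
		return;		/* e.g. no setgroups file on older kernels */
	write(fd, line, strlen(line));
	close(fd);
}

int main(void)
{
	char map[64];
	uid_t uid = geteuid();
	gid_t gid = getegid();

	/* enter a fresh user namespace; we become "root" inside it */
	if (unshare(CLONE_NEWUSER) < 0) {
		perror("unshare");
		return 1;
	}

	/* map uid/gid 0 in the namespace onto our unprivileged host ids;
	 * a single self-mapping line is allowed without privilege */
	snprintf(map, sizeof(map), "0 %u 1", (unsigned)uid);
	write_map("/proc/self/uid_map", map);
	write_map("/proc/self/setgroups", "deny");
	snprintf(map, sizeof(map), "0 %u 1", (unsigned)gid);
	write_map("/proc/self/gid_map", map);

	/* files extracted now get their ownership in namespace terms,
	 * so there is no separate chown pass over 500k files */
	execlp("tar", "tar", "xpf", "template.tar", "-C", "rootfs", (char *)0);
	perror("execlp");
	return 1;
}
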
>>>>> We are not using NFS. We are using a shared block storage that offers us snapshots. So provisioning new containers is
>>>>> extremely cheap and fast. Comparing that with untar is comparing a race car with a Smart. Yes, it can be done, and no, I
>>>>> do not believe we should go backwards.
>>>>>
>>>>> We do not share filesystems between containers; we offer them block devices.
>>>>
>>>> Yes, this is a real nuisance for openstack style deployments.
>>>>
>>>> One nice solution to this imo would be a very thin stackable filesystem
>>>> which does uid shifting, or, better yet, a non-stackable way of shifting
>>>> uids at mount.
>>>
>>> I vote for the non-stackable way too. Maybe at the generic VFS level, so that filesystems
>>> don't have to bother with it. From what I've seen, even simple stacking is quite a challenge.
>> 
>> Do you have any ideas for how to go about it?  It seems like we'd have
>> to have separate inodes per mapping for each file, which is why of
>> course stacking seems "natural" here.
>
> I was thinking about a "lightweight mapping" which is simple shifting. Since
> we're trying to make this co-work with user-ns mappings, a simple uid/gid shift
> should be enough. Please correct me if I'm wrong.
>
> If I'm not, then it looks to be enough to have two per-sb or per-mnt values
> for the uid and gid shift. Per-mnt looks more promising for now, since a container's
> FS may be just a bind-mount from a shared disk.
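
A toy model of those two per-mnt values, with made-up names and the range
checks omitted; the point is only that the same shift is applied in opposite
directions at the two crossings:

#include <stdint.h>

struct mnt_shift {
	int64_t uid_shift;
	int64_t gid_shift;
};

/* id read from disk -> id presented upward (the stat() direction) */
static uint32_t shift_uid_from_disk(const struct mnt_shift *s, uint32_t disk_uid)
{
	return (uint32_t)(disk_uid + s->uid_shift);
}

/* id handed down from above -> id written to disk (the chown() direction) */
static uint32_t shift_uid_to_disk(const struct mnt_shift *s, uint32_t uid)
{
	return (uint32_t)(uid - s->uid_shift);
}

With uid_shift = 100000, a template stored on disk with uids 0..65535 could
back a container whose user namespace owns kuids 100000..165535, with no
chown pass at provisioning time.
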
>
>> Trying to catch the uid/gid at every kernel-userspace crossing seems
>> like a design regression from the current userns approach.  I suppose we
>> could continue in the kuid theme and introduce an iuid/igid for the
>> in-kernel inode uid/gid owners.  Then allow a user privileged in some
>> ns to create a new mount associated with a different mapping for any
>> ids over which he is privileged.
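
Sticking with the kuid theme, a self-contained toy of what a distinct
inode-owner type plus a per-mount extent map might look like; every name
here is made up (kuid_t is re-declared just to keep the toy standalone),
with the kernel's uid_gid_extent as the obvious model:

#include <stdbool.h>
#include <stdint.h>

typedef struct { uint32_t val; } iuid_t;	/* id as stored in the inode */
typedef struct { uint32_t val; } kuid_t;	/* kernel-internal id        */

struct mnt_idmap_extent {
	uint32_t first;		/* first on-disk id covered         */
	uint32_t lower_first;	/* corresponding kernel-internal id */
	uint32_t count;
};

static bool iuid_to_kuid(const struct mnt_idmap_extent *m, iuid_t i, kuid_t *out)
{
	if (i.val < m->first || i.val - m->first >= m->count)
		return false;			/* unmapped ids are rejected */
	out->val = i.val - m->first + m->lower_first;
	return true;
}

The typed wrappers keep the two id spaces from being mixed up at compile
time, which is exactly what the existing kuid_t already buys us.
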
>
> User-space crossing? From my point of view it would be enough if we just turned the
> uid/gid read from disk (well, from wherever the FS gets them) into uids that
> match the user-ns's ones. This should cover only the VFS layer and the related
> syscalls, which is, IIRC, the stat family and chown.
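
For the stat side, that conversion is already there at the syscall edge
today; roughly what fs/stat.c's cp_new_stat() does:

	tmp.st_uid = from_kuid_munged(current_user_ns(), stat->uid);
	tmp.st_gid = from_kgid_munged(current_user_ns(), stat->gid);

so the missing piece is mainly the disk-to-kuid direction, which is still
pinned to the initial namespace.
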
>
> Ouch, and the whole quota engine :\

And POSIX ACLs.

But all of this is 90% done already.  I think today we just have
conversions to the initial user namespace.  We just need a few tweaks to
allow it and a per-superblock user namespace setting.
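
A sketch of what that tweak could look like, assuming a new per-superblock
field (called s_user_ns below; the field does not exist yet) threaded into
the existing i_uid_read()/i_uid_write() helpers in place of the hardcoded
init_user_ns:

static inline uid_t i_uid_read(const struct inode *inode)
{
	return from_kuid(inode->i_sb->s_user_ns, inode->i_uid);
}

static inline void i_uid_write(struct inode *inode, uid_t uid)
{
	inode->i_uid = make_kuid(inode->i_sb->s_user_ns, uid);
}

Filesystems mounted from the initial namespace would keep today's behaviour
by defaulting s_user_ns to &init_user_ns.
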

Eric

