Message-ID: <5385E153.6030607@1h.com>
Date: Wed, 28 May 2014 16:14:59 +0300
From: Marian Marinov <mm@...com>
To: "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: [RFC] Per user namespace process accounting
Hello,
I have the following proposal.
The number of currently running processes is accounted in the root user namespace. The problem I'm facing is that
multiple containers in different user namespaces share the process counters.
So if containerX runs 100 processes with UID 99, containerY needs an NPROC limit above 100 in order to execute any
processes with its own UID 99.
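The sharing comes from the fork-time RLIMIT_NPROC check, which charges every new task against the user_struct
looked up in the global uid hash by kuid. Roughly what copy_process() in kernel/fork.c does (lightly abridged):

	/* Every task whose UID maps to the same kuid is charged
	 * against the same user_struct, regardless of which user
	 * namespace it lives in. */
	retval = -EAGAIN;
	if (atomic_read(&p->real_cred->user->processes) >=
			task_rlimit(p, RLIMIT_NPROC)) {
		if (p->real_cred->user != INIT_USER &&
		    !capable(CAP_SYS_RESOURCE) && !capable(CAP_SYS_ADMIN))
			goto bad_fork_free;
	}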
I know that some of you will tell me that I should not provision all of my containers with the same UID/GID maps, but
that brings its own problems.
We provision the containers from a template. The template contains a lot of files (500k and more), and chowning them
causes a lot of I/O and also slows down provisioning considerably.
The other problem is that when we migrate a container from one host machine to another, the IDs may already be in use
on the new machine and we would need to chown all the files again.
Finally, if we use different UID/GID maps we cannot do live migration to another node, because the UIDs may already be
in use there.
So I'm proposing a hack: modify unshare_userns() to allocate a new user_struct for the cred that is created for the
first task creating the user_ns, and free it in exit_creds().
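A rough, untested sketch of the idea against the current unshare_userns(). alloc_uid_private() is a hypothetical
helper: like alloc_uid(), but allocating a fresh user_struct without looking it up in (or inserting it into) the
global uid hash, since plain alloc_uid() would just hand back the shared per-kuid structure:

	int unshare_userns(unsigned long unshare_flags, struct cred **new_cred)
	{
		struct cred *cred;
		struct user_struct *priv;
		int err = -ENOMEM;

		if (!(unshare_flags & CLONE_NEWUSER))
			return 0;

		cred = prepare_creds();
		if (cred) {
			err = create_user_ns(cred);
			if (err) {
				put_cred(cred);
			} else {
				/* Hypothetical helper: bypasses the global
				 * uid hash so this cred gets its own
				 * process counters. */
				priv = alloc_uid_private(cred->euid);
				if (priv) {
					free_uid(cred->user); /* drop shared ref */
					cred->user = priv;
				}
				*new_cred = cred;
			}
		}

		return err;
	}

The private user_struct would then be released via free_uid() when the creds of the last task in the namespace are
dropped in exit_creds().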
Can you please comment on that?
Or suggest a better solution?
Best regards,
Marian
--
Marian Marinov
Founder & CEO of 1H Ltd.
Jabber/GTalk: hackman@...ber.org
ICQ: 7556201
Mobile: +359 886 660 270