Message-ID: <20150123233026.GP16552@dastard>
Date:	Sat, 24 Jan 2015 10:30:26 +1100
From:	Dave Chinner <david@...morbit.com>
To:	Konstantin Khlebnikov <khlebnikov@...dex-team.ru>
Cc:	Li Xi <pkuelelixi@...il.com>, linux-fsdevel@...r.kernel.org,
	linux-ext4@...r.kernel.org, linux-api@...r.kernel.org,
	tytso@....edu, adilger@...ger.ca, jack@...e.cz,
	viro@...iv.linux.org.uk, hch@...radead.org, dmonakhov@...nvz.org,
	"Eric W. Biederman" <ebiederm@...ssion.com>
Subject: Re: [v8 4/5] ext4: adds FS_IOC_FSSETXATTR/FS_IOC_FSGETXATTR
 interface support

On Fri, Jan 23, 2015 at 02:58:09PM +0300, Konstantin Khlebnikov wrote:
> On 23.01.2015 04:53, Dave Chinner wrote:
> >On Thu, Jan 22, 2015 at 06:28:51PM +0300, Konstantin Khlebnikov wrote:
> >>>+	kprojid = make_kprojid(&init_user_ns, (projid_t)projid);
> >>
> >>Maybe current_user_ns()?
> >>This code should be user-namespace aware from the beginning.
> >
> >No, the code is correct. Project quotas have nothing to do with
> >UIDs and so should never have been included in the uid/gid
> >namespace mapping infrastructure in the first place.
> 
> Right, but the user namespace provides id mapping for project ids too.
> This infrastructure adds support for nested project quotas with
> virtualized ids in sub-containers. I wouldn't say this is a must-have
> feature, but the implementation is trivial because the whole
> infrastructure is already there.

This is an extremely common misunderstanding of project IDs. Project
IDs are completely separate from the UID/GID namespace.  Project
quotas were originally designed specifically for
accounting/enforcing quotas in situations where uid/gid
accounting/enforcing is not possible. This design intent goes back
25 years - it predates XFS...

IOWs, mapping prids via user namespaces defeats the purpose
for which prids were originally intended.

> >Case in point: directory subtree quotas can be used as a resource
> >controller for limiting space usage within separate containers that
> >share the same underlying (large) filesystem via mount namespaces.
> 
> That's exactly my use case: 'sub-volumes' for containers with
> quotas on space usage and inode count.

That doesn't require mapped project IDs. Hard container space limits
can only be controlled by the init namespace, and because an inode
can hold only one project ID, the current ns cannot be allowed to
change the project ID on the inode: that would let the container
escape the resource limits set on the project ID associated with the
sub-mount set up by the init namespace...

i.e.

/mnt			prid = 0, default for entire fs.
/mnt/container1/	prid = 1, inherit, 10GB space limit
/mnt/container2/	prid = 2, inherit, 50GB space limit
.....
/mnt/containerN/	prid = N, inherit, 20GB space limit

And you clone the mount namespace for each container so the root is
at the appropriate /mnt/containerX/.  Now the containers have a
fixed amount of space they can use in the parent filesystem they
know nothing about, and it is enforced by directory subquotas
controlled by the init namespace.  This "fixed amount of space" is
reflected in the container namespace when "df" is run as it will
report the project quota space limits. Adding space to or removing
space from a container is as simple as changing the project quota
limits from the init namespace, i.e. an admin operation controlled by
the host, not the container....
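
For concreteness, here's an untested sketch of how the host admin could
tag one of those container directories from the init namespace using
the FS_IOC_FSGETXATTR/FS_IOC_FSSETXATTR interface this series adds. The
struct fsxattr fields and the FS_XFLAG_PROJINHERIT name are assumed to
match the generic definitions in <linux/fs.h>; adjust the spellings to
the XFS_IOC_*/XFS_XFLAG_* variants if you're building against xfs_fs.h
instead:

/*
 * Untested sketch: tag a container's top-level directory with a
 * project ID and the inherit flag, run from the init namespace.
 * Assumes struct fsxattr, FS_IOC_FSGETXATTR, FS_IOC_FSSETXATTR and
 * FS_XFLAG_PROJINHERIT as defined in <linux/fs.h>.
 */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/fs.h>

int main(int argc, char **argv)
{
	struct fsxattr fsx;
	int fd;

	if (argc != 3) {
		fprintf(stderr, "usage: %s <dir> <projid>\n", argv[0]);
		return 1;
	}

	fd = open(argv[1], O_RDONLY | O_DIRECTORY);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	/* read-modify-write so other xflags are preserved */
	if (ioctl(fd, FS_IOC_FSGETXATTR, &fsx) < 0) {
		perror("FS_IOC_FSGETXATTR");
		return 1;
	}

	fsx.fsx_projid = (__u32)strtoul(argv[2], NULL, 10);
	fsx.fsx_xflags |= FS_XFLAG_PROJINHERIT; /* new children inherit the prid */

	if (ioctl(fd, FS_IOC_FSSETXATTR, &fsx) < 0) {
		perror("FS_IOC_FSSETXATTR");
		return 1;
	}

	close(fd);
	return 0;
}

Run that once per container directory (prid 1, 2, ..., N), then set the
actual block/inode limits on each project from the host with the normal
quota tools.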

Allowing the container to modify the prid and/or the inherit bit of
inodes in its namespace means the user can define their own space
usage limits, or even turn them off. It's not a resource container at
that point.  Hence, only if the current_ns cannot change project
quotas will we have a hard fence on space usage that the container
*cannot exceed*.
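
To be concrete about what "cannot change" would mean in code, the
guard I have in mind for the FSSETXATTR path looks roughly like the
sketch below. This is illustrative only, not the actual patch, and the
CAP_SYS_ADMIN check is just a stand-in for whatever privilege check is
appropriate:

/*
 * Sketch only: refuse project ID / inherit flag changes from anything
 * other than the init user namespace, so a container can never lift
 * or widen the fence the host put around it.
 */
#include <linux/capability.h>
#include <linux/cred.h>
#include <linux/errno.h>
#include <linux/user_namespace.h>

static int may_change_projid(void)
{
	/* non-init userns == inside a container: hands off the prid */
	if (current_user_ns() != &init_user_ns)
		return -EPERM;

	/* stand-in privilege check: host admin only */
	if (!capable(CAP_SYS_ADMIN))
		return -EPERM;

	return 0;
}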

Yes, I know there are other use cases for project quotas *within* a
container as controlled by the user (the same as existing project
quota usage), but we don't have the capability to store multiple
project IDs on each inode, nor to account/enforce across multiple
project IDs on an inode. Nor, really, do we want to (on-disk format
changes would be required), and hence we can have one or the other,
but not both.

Further, in a containerised system, giving the admin a trivial,
easy-to-manage mechanism for enforcing hard limits on each container's
use of shared filesystem space is far more important than catering to
the occasional user who might need project quotas inside a container.

These are the points I brought up when I initially reviewed the user
namespace patches - the userns developer ignored my concerns and the
code was merged without acknowledging them, let alone addressing
them.

As we (the XFS guys) have no way of knowing when such a distinction
should be made, and with the user ns developers being completely
unresponsive on the subject, we made the decision ourselves.  Our
only concern was to be consistent, safe and predictable, and that
means we chose to allow project quotas to be used only as an external
container resource hardwall limit and hence *never* allow access to
project quotas from inside container namespaces.

That's the long and the short of it: project IDs are independent of
user IDs, and they cannot sanely be used both inside and outside user
namespaces at the same time. Hence they should never have been
included in the user namespace mappings in the first place.

Cheers,

Dave.
-- 
Dave Chinner
david@...morbit.com
