Message-ID: <ZV5/ilfUoqC2PW0D@memverge.com>
Date: Wed, 22 Nov 2023 17:24:10 -0500
From: Gregory Price <gregory.price@...verge.com>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: Gregory Price <gourry.memverge@...il.com>, linux-mm@...ck.org,
linux-doc@...r.kernel.org, linux-fsdevel@...r.kernel.org,
linux-api@...r.kernel.org, linux-arch@...r.kernel.org,
linux-kernel@...r.kernel.org, arnd@...db.de, tglx@...utronix.de,
luto@...nel.org, mingo@...hat.com, bp@...en8.de,
dave.hansen@...ux.intel.com, x86@...nel.org, hpa@...or.com,
mhocko@...nel.org, tj@...nel.org, ying.huang@...el.com
Subject: Re: [RFC PATCH 00/11] mm/mempolicy: Make task->mempolicy externally
modifiable via syscall and procfs
On Wed, Nov 22, 2023 at 01:33:48PM -0800, Andrew Morton wrote:
> On Wed, 22 Nov 2023 16:11:49 -0500 Gregory Price <gourry.memverge@...il.com> wrote:
>
> > The patch set changes task->mempolicy to be modifiable by tasks other
> > than just current.
> >
> > The ultimate goal is to make mempolicy more flexible and extensible,
> > such as adding interleave weights (which may need to change at runtime
> > due to hotplug events). Making mempolicy externally modifiable allows
> > for userland daemons to make runtime performance adjustments to running
> > tasks without that software needing to be made numa-aware.
>
> Please add to this [0/N] a full description of the security aspect: who
> can modify whose mempolicy, along with a full description of the
> reasoning behind this decision.
>
Will do. For the sake of v0 for now:
1) the task itself (task == current)
for obvious reasons: it already can
2) from external interfaces: CAP_SYS_NICE
There might be an argument for CAP_SYS_ADMIN, but CAP_SYS_NICE has
access to scheduling controls, and mbind uses CAP_SYS_NICE to validate
whether shared pages can be migrated. The same is true of migrate_pages
and other memory management controls. For this reason, I chose to gate
the task syscalls behind CAP_SYS_NICE unless (task == current).
I'm by no means an expert in this area, so slap away if I'm egregiously
wrong here.
I will add additional security context in v2 about what impact changing
a mempolicy can have at runtime. This will mostly concern cpuset
implications, as mempolicy itself is not a "constraining" interface in
terms of security. For example: one can mbind/interleave/whatever a set
of nodes, and then use migrate_pages or move_pages to violate that
mempolicy. This is explicitly allowed and discussed in the
implementation of the existing syscalls / libnuma.
However, cpusets must be respected.
This is why I refactored out replace_mempolicy and reused it: this
enforcement is already handled by checking task->mems_allowed.
> > 3. Add external interfaces which allow for a task mempolicy to be
> > modified by another task. This is implemented in 4 syscalls
> > and a procfs interface:
> > sys_set_task_mempolicy
> > sys_get_task_mempolicy
> > sys_set_task_mempolicy_home_node
> > sys_task_mbind
> > /proc/[pid]/mempolicy
>
> Why is the procfs interface needed? Doesn't it simply duplicate the
> syscall interface? Please update [0/N] with a description of this
> decision.
>
Honestly, I wrote the procfs interface first and then came back around
to implement the syscalls. mbind is not friendly to being procfs'd,
so if the preference is to have only one, not both, it should
probably be the syscalls.
That said, when I introduce weighted interleave on top of this, having a
simple procfs interface to those weights would be valuable, so I
imagined something like `proc/mempolicy` to determine if interleave was
being used and something like `proc/mpol_interleave_weights` for a clean
interface to update weights.
However, in the same breath, I have a prior RFC with set/get_mempolicy2,
which could probably take all future mempolicy extensions and wrap them
up into one pair of syscalls, instead of us ending up with 200 more
sys_mempolicy_whatever calls as memory-attached fabrics become more common.
So... yeah... this is one area where I think the community very much
needs to comment: set/get_mempolicy2, many new mempolicy syscalls,
procfs? All of the above?
The procfs route provides a command-line user a nice, clean way to
update policies without the need for an additional tool, but if there is
an "all or nothing" preference on mempolicy controls - then procfs is
probably not the way to go.
This RFC at least shows there are options. I very much welcome input in
this particular area.
> > The new syscalls are the same as their current-task counterparts,
> > except that they take a pid as an argument. The exception is
> > task_mbind, which required a new struct due to the number of args.
> >
> > The /proc/pid/mempolicy re-uses the interface mpol_parse_str format
> > to enable get/set of mempolicy via procfs.
> >
> > mpol_parse_str format:
> > <mode>[=<flags>][:<nodelist>]
> >
> > Example usage:
> >
> > echo "default" > /proc/pid/mempolicy
> > echo "prefer=relative:0" > /proc/pid/mempolicy
> > echo "interleave:0-3" > /proc/pid/mempolicy
>
> What do we get when we read from this? Please add to changelog.
>
> Also a description of the permissions for this procfs file, along with
> reasoning. If it has global readability, and there's something
> interesting in there, let's show that the security implications have
> been fully considered.
>
Ah, should have included that. Will add. For the sake of v0:
Current permissions: (S_IRUSR|S_IWUSR)
Which permits the owner and, obviously, root. I tried to keep parity
with the syscall interface.
The total set of (current) policy outputs is:
"default"
"local"
"prefer:node"
"prefer=static:node"
"prefer=relative:node"
"prefer (many):nodelist"
"prefer (many)=static:nodelist"
"prefer (many)=relative:nodelist"
"interleave:nodelist"
"interleave=static:nodelist"
"interleave=relative:nodelist"
"bind:nodelist"
"bind=static:nodelist"
"bind=relative:nodelist"
There doesn't seem to be much of a security implication here, at least
not anything that can't already be gleaned via something like numa_maps,
but it does provide *some* level of memory placement information, so
it's still probably best gated behind owner/root.
That said, changing this policy does not imply it will actually be
used, because individual VMA policies can override the task policy. So
it really doesn't reveal much information at all.
Something I just noticed: mpol_parse_str does not presently support the
numa balancing flag, so that would have to be added to achieve parity
with the set_mempolicy syscall.
> > Changing the mempolicy does not induce memory migrations via the
> > procfs interface (which is the exact same behavior as set_mempolicy).
> >
>
Thanks for taking a quick look!
~Gregory