Message-ID: <CAG48ez3dwtpjpSPKAheWzYcxrLvNeZ1=1OJQoiD3HWfYqs8H4Q@mail.gmail.com>
Date: Fri, 10 Sep 2021 01:13:17 +0200
From: Jann Horn <jannh@...gle.com>
To: Peter Oskolkov <posk@...gle.com>
Cc: Peter Oskolkov <posk@...k.io>,
Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...hat.com>,
Thomas Gleixner <tglx@...utronix.de>,
linux-kernel@...r.kernel.org, linux-api@...r.kernel.org,
Paul Turner <pjt@...gle.com>, Ben Segall <bsegall@...gle.com>,
Andrei Vagin <avagin@...gle.com>,
Thierry Delisle <tdelisle@...terloo.ca>,
Andy Lutomirski <luto@...nel.org>
Subject: Re: [PATCH 2/4 v0.5] sched/umcg: RFC: add userspace atomic helpers
On Fri, Sep 10, 2021 at 12:10 AM Peter Oskolkov <posk@...gle.com> wrote:
> On Thu, Sep 9, 2021 at 2:21 PM Jann Horn <jannh@...gle.com> wrote:
> > > Option 1: as you suggest, pin pages holding struct umcg_task in sys_umcg_ctl;
> >
> > FWIW, there is a variant on this that might also be an option:
> >
> > You can create a new memory mapping from kernel code and stuff pages
> > into it that were originally allocated as normal kernel pages. This is
> > already done in a bunch of places in the kernel.
> >
> > This has the advantage that it avoids pinning random pages that were
> > originally allocated from ZONE_MOVABLE blocks. (Or pinning hugepages,
> > or something like that.)
> > The downsides are that it reduces userspace's freedom to place the
> > UAPI structs wherever it wants (so userspace e.g. probably can't
> > directly put the struct in thread-local storage, instead it has to
> > store a pointer to the struct), and that you need to write a bunch of
> > code to create the mapping and allocate slots in these pages for
> > userspace threads.
>
> Thanks again, Jann! Why do you think using custom mapping like this is
> preferable to doing just kzalloc(size, GFP_USER), or maybe
> alloc_page(GFP_USER)?
kzalloc() / alloc_page() just give you kernel memory allocations; but
if you want userspace to be able to directly read/write that memory,
you also have to map the same physical memory into the userspace
pagetables (at a separate address). That requires setting up a VMA,
which tells the MM subsystem that the userspace address range is in
use and specifies what should happen when userspace calls memory
management syscalls on it, or when pagefaults occur in it.
Also, when allocating memory that is meant to be mapped into
userspace, you have to use alloc_page(); memory from kzalloc() (in
other words, slab memory) can't be mapped into userspace. (Technically
it could be mapped into userspace with PFNMAP, but doing that would be
weird.)
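To make that more concrete, here is a very rough, untested sketch of
what such an ->mmap handler could look like (the umcg_* names are made
up for illustration, this is not code from the patch):

#include <linux/fs.h>
#include <linux/gfp.h>
#include <linux/init.h>
#include <linux/mm.h>
#include <linux/module.h>

/* one kernel-allocated page that backs the UAPI structs */
static struct page *umcg_page;

static int umcg_dev_mmap(struct file *file, struct vm_area_struct *vma)
{
	if (vma->vm_end - vma->vm_start != PAGE_SIZE || vma->vm_pgoff)
		return -EINVAL;

	/* the mapping must not grow or move under us */
	vma->vm_flags |= VM_DONTEXPAND;

	/* insert the kernel-allocated page into the userspace pagetables */
	return vm_insert_page(vma, vma->vm_start, umcg_page);
}

static const struct file_operations umcg_dev_fops = {
	.owner	= THIS_MODULE,
	.mmap	= umcg_dev_mmap,
};

static int __init umcg_dev_init(void)
{
	/* must be a real page allocation, not slab memory from kzalloc() */
	umcg_page = alloc_page(GFP_KERNEL | __GFP_ZERO);
	if (!umcg_page)
		return -ENOMEM;
	/* the kernel side can access it directly via page_address(umcg_page) */
	return 0;
}

Userspace would then mmap() that file/device and the kernel would hand
out per-thread slots inside the page; kernel code can read/write the
same page through page_address() without any pinning games.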
> The documentation here
> https://www.kernel.org/doc/html/latest/core-api/memory-allocation.html
> says:
>
> "GFP_USER means that the allocated memory is not movable and it must
> be directly accessible by the kernel", which sounds exactly what we
> need here.
If you look at the actual definitions of GFP_KERNEL and GFP_USER:
#define GFP_KERNEL (__GFP_RECLAIM | __GFP_IO | __GFP_FS)
#define GFP_USER (__GFP_RECLAIM | __GFP_IO | __GFP_FS | __GFP_HARDWALL)
you can see that the only difference between them is the
__GFP_HARDWALL flag, which is documented as follows:
* %__GFP_HARDWALL enforces the cpuset memory allocation policy.
So this basically just determines whether memory allocations fail if
the task's memory allocation policy says they should fail because no
memory is available on the right nodes. The choice of GFP flags
doesn't influence whether userspace ends up getting access to the
allocated memory. (On 32-bit machines, it does influence whether the
kernel can easily access the memory though: Normal userspace anonymous
memory is GFP_HIGHUSER, which includes __GFP_HIGHMEM, meaning that the
returned page doesn't have to be mapped in the kernel's linear
mapping; it can be in "high memory".)