Message-ID: <CAG48ez0jfsS=gKN0Vo_VS2EvvMBvEr+QNz0vDKPeSAzsrsRwPQ@mail.gmail.com>
Date: Wed, 14 Apr 2021 08:46:40 +0200
From: Jann Horn <jannh@...gle.com>
To: Andrei Vagin <avagin@...il.com>
Cc: kernel list <linux-kernel@...r.kernel.org>,
Linux API <linux-api@...r.kernel.org>,
linux-um@...ts.infradead.org, criu@...nvz.org, avagin@...gle.com,
Andrew Morton <akpm@...ux-foundation.org>,
Andy Lutomirski <luto@...nel.org>,
Anton Ivanov <anton.ivanov@...bridgegreys.com>,
Christian Brauner <christian.brauner@...ntu.com>,
Dmitry Safonov <0x7f454c46@...il.com>,
Ingo Molnar <mingo@...hat.com>, Jeff Dike <jdike@...toit.com>,
Mike Rapoport <rppt@...ux.ibm.com>,
Michael Kerrisk <mtk.manpages@...il.com>,
Oleg Nesterov <oleg@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Richard Weinberger <richard@....at>,
Thomas Gleixner <tglx@...utronix.de>
Subject: Re: [PATCH 0/4 POC] Allow executing code and syscalls in another
address space
On Wed, Apr 14, 2021 at 7:59 AM Andrei Vagin <avagin@...il.com> wrote:
> We already have process_vm_readv and process_vm_writev to read from
> and write to another process's memory faster than we can with ptrace.
> Now it is time for process_vm_exec, which allows executing code in the
> address space of another process. We can do this with ptrace, but it
> is much slower.
>
> = Use-cases =
It seems to me like your proposed API doesn't really fit either one of
those use cases well...
> Here are two known use cases. The first one is “application kernel”
> sandboxes like User-mode Linux and gVisor. In this case, we have a
> process that runs the sandbox kernel and a set of stub processes that
> are used to manage guest address spaces. Guest code is executed in the
> context of stub processes, but all system calls are intercepted and
> handled in the sandbox kernel. Right now, these sorts of sandboxes use
> PTRACE_SYSEMU to trap system calls, but process_vm_exec can
> significantly speed them up.
In this case, since you really only want an mm_struct to run code
under, it seems weird to create a whole task with its own PID and so
on. Something similar to the /dev/kvm API would probably be more
appropriate here. Implementation options that I see for that would be:
1. mm_struct-based:
   a set of syscalls to create a new mm_struct, change memory mappings
   under that mm_struct, and switch to it (a rough userspace sketch of
   this option is below)

2. pagetable-mirroring-based:
   like /dev/kvm, an API to create a new pagetable, mirror parts of
   the mm_struct's pagetables over into it with modified permissions
   (like KVM_SET_USER_MEMORY_REGION), and run code under that context.
   Page fault handling would first handle the fault against mm->pgd
   as normal, then mirror the PTE over into the secondary pagetables.
   Invalidation could be handled with MMU notifiers.
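
To make option 1 concrete, here is a rough userspace sketch. To be
clear: none of these syscalls exist, and the names, signatures and the
regs structure are made up purely to illustrate the shape of the API.

    /* hypothetical API, illustration only */
    int mm_fd = create_mm();               /* new, empty mm_struct */

    /* set up guest mappings under that mm_struct */
    mm_mmap(mm_fd, guest_addr, len, PROT_READ | PROT_WRITE | PROT_EXEC,
            MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

    /* switch to that mm and run until the guest traps (syscall, fault,
     * signal), returning control to the caller much like PTRACE_SYSEMU
     * does today, but without needing a separate stub task */
    mm_run(mm_fd, &regs);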
> Another use case is CRIU (Checkpoint/Restore in User-space). Several
> process properties can be retrieved only from the process itself. Right
> now, we use parasite code that is injected into the process. We do
> this with ptrace, but it is slow, unsafe, and tricky.
But this API will only let you run code under the *mm* of the target
process, not fully in the context of a target *task*, right? So you
still won't be able to use this for accessing anything other than
memory? That doesn't seem very generically useful to me.
Also, I don't doubt that anything involving ptrace is kinda tricky,
but it would be nice to have some more detail on what exactly makes
this slow, unsafe, and tricky. Are there API additions for ptrace that
would make this work better? I imagine you're thinking of things like
an API for injecting a syscall into the target process without having
to first somehow find an existing SYSCALL instruction in the target
process?
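
For reference, the classic injection dance looks roughly like this
(x86-64, error handling omitted, a sketch only; assumes `pid` is the
target and `syscall_ip` already points at a SYSCALL instruction that
was located in the target beforehand):

    #include <sys/ptrace.h>
    #include <sys/syscall.h>
    #include <sys/user.h>
    #include <sys/wait.h>

    struct user_regs_struct regs, saved;

    ptrace(PTRACE_ATTACH, pid, 0, 0);
    waitpid(pid, NULL, 0);
    ptrace(PTRACE_GETREGS, pid, 0, &saved);   /* save original state */

    regs = saved;
    regs.rip = syscall_ip;     /* existing SYSCALL insn in the target */
    regs.rax = __NR_getpid;    /* syscall number; args go in rdi etc. */
    ptrace(PTRACE_SETREGS, pid, 0, &regs);

    ptrace(PTRACE_SINGLESTEP, pid, 0, 0);     /* step over the SYSCALL */
    waitpid(pid, NULL, 0);
    ptrace(PTRACE_GETREGS, pid, 0, &regs);    /* regs.rax = return value */

    ptrace(PTRACE_SETREGS, pid, 0, &saved);   /* restore and detach */
    ptrace(PTRACE_DETACH, pid, 0, 0);

An explicit "run this syscall in the target" ptrace operation would at
least remove the need to locate syscall_ip and to clobber and restore
registers by hand.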
> process_vm_exec can
> simplify the process of injecting parasite code, and it will allow
> pre-dumping memory without stopping processes. The pre-dump here is
> when we enable a memory tracker and dump memory while a process
> continues running. On each iteration, we dump memory that has changed
> since the previous iteration. In the final step, we stop processes and
> dump their full state. Right now, the most effective way to dump
> process memory is to create a set of pipes and splice memory into
> these pipes from the parasite code. With process_vm_exec, we will be
> able to call vmsplice directly. It means that we will not need to stop
> a process to inject the parasite code.
Alternatively, you could add splice support to /proc/$pid/mem or add a
syscall similar to process_vm_readv() that splices into a pipe, right?
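Something like this, hypothetically (this syscall does not exist, the
signature is only meant to illustrate the shape):

    /* splice riovcnt ranges of the target's memory straight into the
     * pipe pipe_wfd, without bouncing through the caller's buffers */
    ssize_t process_vm_splice(pid_t pid, int pipe_wfd,
                              const struct iovec *riov,
                              unsigned long riovcnt, unsigned int flags);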