Message-ID: <B748ED06-77CC-47F6-AA5C-0D9E2AD1BDB2@ornl.gov>
Date: Wed, 28 Feb 2018 23:12:03 +0000
From: "Atchley, Scott" <atchleyes@...l.gov>
To: Open MPI Developers <devel@...ts.open-mpi.org>
CC: Andrew Morton <akpm@...ux-foundation.org>,
Mike Rapoport <rppt@...ux.vnet.ibm.com>,
Alexander Viro <viro@...iv.linux.org.uk>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
"linux-fsdevel@...r.kernel.org" <linux-fsdevel@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"linux-api@...r.kernel.org" <linux-api@...r.kernel.org>,
"criu@...nvz.org" <criu@...nvz.org>,
"gdb@...rceware.org" <gdb@...rceware.org>,
"rr-dev@...illa.org" <rr-dev@...illa.org>,
Arnd Bergmann <arnd@...db.de>,
Michael Kerrisk <mtk.manpages@...il.com>,
Thomas Gleixner <tglx@...utronix.de>,
Josh Triplett <josh@...htriplett.org>,
Jann Horn <jannh@...gle.com>,
Greg KH <gregkh@...uxfoundation.org>,
Andrei Vagin <avagin@...nvz.org>
Subject: Re: [OMPI devel] [PATCH v5 0/4] vm: add a syscall to map a process
memory into a pipe
> On Feb 28, 2018, at 2:12 AM, Pavel Emelyanov <xemul@...tuozzo.com> wrote:
>
> On 02/27/2018 05:18 AM, Dmitry V. Levin wrote:
>> On Mon, Feb 26, 2018 at 12:02:25PM +0300, Pavel Emelyanov wrote:
>>> On 02/21/2018 03:44 AM, Andrew Morton wrote:
>>>> On Tue, 9 Jan 2018 08:30:49 +0200 Mike Rapoport <rppt@...ux.vnet.ibm.com> wrote:
>>>>
>>>>> This patch series introduces a new process_vmsplice system call that combines
>>>>> the functionality of process_vm_readv and vmsplice.
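
For anyone trying to picture the interface: a rough, untested sketch of how the
proposed call might be driven. The syscall number here is a placeholder and the
signature (pid, pipe fd, iovec array, segment count, flags) is my reading of the
series, not a settled ABI; there is no glibc wrapper.

#define _GNU_SOURCE
#include <sys/syscall.h>
#include <sys/types.h>
#include <sys/uio.h>
#include <unistd.h>

/* Placeholder only: the real number would come from the merged patches. */
#ifndef __NR_process_vmsplice
#define __NR_process_vmsplice -1
#endif

/* Assumed signature from the series: reference NR_SEGS ranges of PID's
   memory into the pipe PIPE_FD without copying them through the
   caller's address space. */
static ssize_t process_vmsplice(pid_t pid, int pipe_fd,
                                const struct iovec *iov,
                                unsigned long nr_segs, unsigned int flags)
{
        return syscall(__NR_process_vmsplice, pid, pipe_fd, iov,
                       nr_segs, flags);
}
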
>>>>
>>>> All seems fairly straightforward. The big question is: do we know that
>>>> people will actually use this, and get sufficient value from it to
>>>> justify its addition?
>>>
>>> Yes, that's what bothers us a lot too :) I've tried to start by finding out whether
>>> anyone uses the sys_read/write_process_vm() calls, but failed :( Does anybody know
>>> how popular these syscalls are?
>>
>> Well, process_vm_readv itself is quite popular; it's used by debuggers nowadays.
>> See, e.g.:
>> $ strace -qq -esignal=none -eprocess_vm_readv strace -qq -o/dev/null cat /dev/null
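
(For readers following along: process_vm_readv() copies the target's memory into a
local buffer, so a debugger's peek looks roughly like the helper below. The extra
copy into the caller's address space is exactly what the proposed call would avoid.)

#define _GNU_SOURCE
#include <sys/types.h>
#include <sys/uio.h>
#include <unistd.h>

/* Copy LEN bytes from REMOTE_ADDR in PID's address space into BUF.
   This is the existing call debuggers use today. */
static ssize_t peek_remote(pid_t pid, void *remote_addr, void *buf, size_t len)
{
        struct iovec local  = { .iov_base = buf,         .iov_len = len };
        struct iovec remote = { .iov_base = remote_addr, .iov_len = len };

        return process_vm_readv(pid, &local, 1, &remote, 1, 0);
}
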
>
> I see. Well, yes, this use case will not benefit much from remote splice. How about
> more interactive debugging by, say, gdb? Could it attach, then splice all the memory,
> and analyze the victim's code/data without copying it into its own address space?
>
> -- Pavel
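
That interactive flow might look something like the sketch below, again assuming the
hypothetical process_vmsplice() wrapper from earlier and hand-waving the attach
semantics: stop the target, reference the pages of interest into a pipe, resume it,
and only then decide what to pull out.

#include <sys/ptrace.h>
#include <sys/types.h>
#include <sys/uio.h>
#include <sys/wait.h>
#include <unistd.h>

/* Hypothetical debugger flow: stop PID, move [ADDR, ADDR+LEN) into a
   pipe as page references (no copy), then resume the target.  Pipe
   capacity limits are ignored in this sketch. */
static int snapshot_range(pid_t pid, void *addr, size_t len, int pipefd[2])
{
        struct iovec iov = { .iov_base = addr, .iov_len = len };
        int rc = -1;

        if (pipe(pipefd) < 0)
                return -1;

        if (ptrace(PTRACE_ATTACH, pid, NULL, NULL) < 0)
                goto out;
        waitpid(pid, NULL, 0);          /* wait for the stop */

        if (process_vmsplice(pid, pipefd[1], &iov, 1, 0) >= 0)
                rc = 0;

        ptrace(PTRACE_DETACH, pid, NULL, NULL);
out:
        if (rc < 0) {
                close(pipefd[0]);
                close(pipefd[1]);
        }
        return rc;
}
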
I may be completely off base, but could a FUSE daemon use this to read memory from
the client and dump it to a file descriptor without copying the data through its own
address space?
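
If so, the back half would presumably just be plain splice(2). A sketch, assuming
the pipe was filled by the new call as above, with out_fd being wherever the daemon
dumps the data:

#define _GNU_SOURCE
#include <fcntl.h>
#include <unistd.h>

/* Drain LEN bytes from the pipe's read end into OUT_FD.  splice(2)
   moves the pages pipe -> fd inside the kernel, so the daemon never
   touches the data in userspace. */
static int drain_pipe(int pipe_rd, int out_fd, size_t len)
{
        while (len > 0) {
                ssize_t n = splice(pipe_rd, NULL, out_fd, NULL, len,
                                   SPLICE_F_MOVE);
                if (n <= 0)
                        return -1;
                len -= (size_t)n;
        }
        return 0;
}
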