Message-ID: <4C91E2CC.9040709@redhat.com>
Date: Thu, 16 Sep 2010 11:26:36 +0200
From: Avi Kivity <avi@...hat.com>
To: Christopher Yeoh <cyeoh@....ibm.com>
CC: Bryan Donlan <bdonlan@...il.com>, linux-kernel@...r.kernel.org,
Linux Memory Management List <linux-mm@...ck.org>,
Ingo Molnar <mingo@...e.hu>
Subject: Re: [RFC][PATCH] Cross Memory Attach
On 09/16/2010 03:18 AM, Christopher Yeoh wrote:
> On Wed, 15 Sep 2010 23:46:09 +0900
> Bryan Donlan <bdonlan@...il.com> wrote:
>
> > On Wed, Sep 15, 2010 at 19:58, Avi Kivity <avi@...hat.com> wrote:
> >
> > > Instead of those two syscalls, how about a vmfd(pid_t pid, ulong
> > > start, ulong len) system call which returns an file descriptor that
> > > represents a portion of the process address space. You can then
> > > use preadv() and pwritev() to copy memory, and
> > > io_submit(IO_CMD_PREADV) and io_submit(IO_CMD_PWRITEV) for
> > > asynchronous variants (especially useful with a dma engine, since
> > > that adds latency).
> > >
> > > With some care (and use of mmu_notifiers) you can even mmap() your
> > > vmfd and access remote process memory directly.
> >
> > Rather than introducing a new vmfd() API for this, why not just add
> > implementations for these more efficient operations to the existing
> > /proc/$pid/mem interface?
>
> Perhaps I'm misunderstanding something here, but
> accessing /proc/$pid/mem requires ptracing the target process.
> We can't really have all these MPI processes ptracing each other
> just to send/receive a message....
>
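To make the quoted vmfd() idea concrete, here is a rough, untested sketch of
the intended usage (vmfd() is the proposed syscall and does not exist anywhere
yet; the helper name is only illustrative, everything else is ordinary libc):

#define _GNU_SOURCE
#include <sys/types.h>
#include <sys/uio.h>
#include <unistd.h>

/* Proposed syscall (not implemented): returns an fd representing
 * [start, start + len) of pid's address space. */
extern int vmfd(pid_t pid, unsigned long start, unsigned long len);

/* Copy len bytes from remote_addr in the target process into dst,
 * using ordinary preadv() on the vmfd. */
static ssize_t copy_from_remote(pid_t pid, void *dst,
                                unsigned long remote_addr, size_t len)
{
        struct iovec iov = { .iov_base = dst, .iov_len = len };
        int fd = vmfd(pid, remote_addr, len);
        ssize_t ret;

        if (fd < 0)
                return -1;
        ret = preadv(fd, &iov, 1, 0);   /* offset relative to the window */
        close(fd);
        return ret;
}

An io_submit(IO_CMD_PREADV) on the same fd would be the asynchronous variant.
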
You could have each process open /proc/self/mem and pass the fd using
SCM_RIGHTS.
That also eliminates a race: with copy_to_process(), by the time the pid is
looked up it might already designate a different process.
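
A minimal sketch of that fd handoff, assuming an already-connected AF_UNIX
socket between the two processes (function and variable names are only
illustrative):

#include <fcntl.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>
#include <unistd.h>

/* Sender: open our own memory and pass the fd to a peer over a unix socket. */
static int send_mem_fd(int unix_sock)
{
        int memfd = open("/proc/self/mem", O_RDWR);
        char dummy = 0;
        struct iovec iov = { .iov_base = &dummy, .iov_len = 1 };
        union {
                struct cmsghdr align;
                char buf[CMSG_SPACE(sizeof(int))];
        } u;
        struct msghdr msg = {
                .msg_iov = &iov,
                .msg_iovlen = 1,
                .msg_control = u.buf,
                .msg_controllen = sizeof(u.buf),
        };
        struct cmsghdr *cmsg;

        if (memfd < 0)
                return -1;
        cmsg = CMSG_FIRSTHDR(&msg);
        cmsg->cmsg_level = SOL_SOCKET;
        cmsg->cmsg_type = SCM_RIGHTS;
        cmsg->cmsg_len = CMSG_LEN(sizeof(int));
        memcpy(CMSG_DATA(cmsg), &memfd, sizeof(int));
        return sendmsg(unix_sock, &msg, 0);
}

/* Receiver: pull the fd out of the SCM_RIGHTS control message with recvmsg(),
 * then pread(fd, buf, len, remote_vaddr) reads the sender's memory at that
 * virtual address, and pwrite() writes it. */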
--
error compiling committee.c: too many arguments to function