Message-ID: <20100916104819.36d10acb@lilo>
Date: Thu, 16 Sep 2010 10:48:19 +0930
From: Christopher Yeoh <cyeoh@....ibm.com>
To: Bryan Donlan <bdonlan@...il.com>
Cc: Avi Kivity <avi@...hat.com>, linux-kernel@...r.kernel.org,
Linux Memory Management List <linux-mm@...ck.org>,
Ingo Molnar <mingo@...e.hu>
Subject: Re: [RFC][PATCH] Cross Memory Attach
On Wed, 15 Sep 2010 23:46:09 +0900
Bryan Donlan <bdonlan@...il.com> wrote:
> On Wed, Sep 15, 2010 at 19:58, Avi Kivity <avi@...hat.com> wrote:
>
> > Instead of those two syscalls, how about a vmfd(pid_t pid, ulong
> > start, ulong len) system call which returns an file descriptor that
> > represents a portion of the process address space. You can then
> > use preadv() and pwritev() to copy memory, and
> > io_submit(IO_CMD_PREADV) and io_submit(IO_CMD_PWRITEV) for
> > asynchronous variants (especially useful with a dma engine, since
> > that adds latency).
> >
> > With some care (and use of mmu_notifiers) you can even mmap() your
> > vmfd and access remote process memory directly.
>
> Rather than introducing a new vmfd() API for this, why not just add
> implementations for these more efficient operations to the existing
> /proc/$pid/mem interface?
Perhaps I'm misunderstanding something here, but
accessing /proc/$pid/mem requires ptracing the target process.
We can't really have all these MPI processes ptracing each other
just to send/receive a message....
Regards,
Chris
--
cyeoh@...ibm.com