Message-ID: <20100915225019.4ca665fc@lilo>
Date: Wed, 15 Sep 2010 22:50:19 +0930
From: Christopher Yeoh <cyeoh@....ibm.com>
To: Ingo Molnar <mingo@...e.hu>
Cc: linux-kernel@...r.kernel.org,
Andrew Morton <akpm@...ux-foundation.org>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>, linux-mm@...ck.org
Subject: Re: [RFC][PATCH] Cross Memory Attach
On Wed, 15 Sep 2010 10:02:35 +0200
Ingo Molnar <mingo@...e.hu> wrote:
>
> What did those OpenMPI facilities use before your patch - shared
> memory or sockets?
This comparison is against OpenMPI using the shared-memory BTL.
> I have an observation about the interface:
>
> A small detail: 'int flags' should probably be 'unsigned long flags'
> - it leaves more space.
ok.
> Also, note that there is a further performance optimization possible
> here: if the other task's ->mm is the same as this task's (they share
> the MM), then the copy can be done straight in this process context,
> without GUP. User-space might not necessarily be aware of this so it
> might make sense to express this special case in the kernel too.
ok.
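To illustrate, a minimal sketch of that fast path (the function names here
are illustrative, not from the patch; copy_via_gup() stands in for the
existing GUP-based path):

#include <linux/sched.h>
#include <linux/uaccess.h>

/* hypothetical slow path: pin the remote task's pages with GUP and copy */
static ssize_t copy_via_gup(struct task_struct *task, void __user *dst,
			    const void __user *src, size_t len);

/*
 * Fast-path sketch: if the target task shares our mm, both buffers are
 * mapped in the current address space, so a direct user-to-user copy
 * works and get_user_pages() can be skipped entirely.
 */
static ssize_t copy_to_remote_task(struct task_struct *task,
				   void __user *dst,
				   const void __user *src,
				   size_t len)
{
	if (task->mm == current->mm)
		return copy_in_user(dst, src, len) ? -EFAULT : (ssize_t)len;

	return copy_via_gup(task, dst, src, len);
}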
> More fundamentally, wouldn't it make sense to create an iovec
> interface here? If the Gather(v) / Scatter(v) / AlltoAll(v) workloads
> have any fragmentation on the user-space buffer side then the copy of
> multiple areas could be done in a single syscall. (the MM lock has to
> be taken only once, the target task looked up only once, etc.)
Yes, I think so. Currently, where I'm using the interface in OpenMPI, I
can't take advantage of this, but that could change in the future, and
it's likely other MPI implementations could take advantage of it already.
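Something along the lines of readv(2)/writev(2) would probably work; a rough
sketch (the name, argument layout and flags width are all illustrative, not
the posted interface):

#include <sys/types.h>
#include <sys/uio.h>		/* struct iovec */

/*
 * Illustrative iovec-based variant: several local and remote regions are
 * described in one call, so the target task is looked up and its mm lock
 * taken only once per syscall rather than once per region.
 */
ssize_t copy_from_process_iov(pid_t pid,
			      const struct iovec *local_iov,
			      unsigned long liovcnt,
			      const struct iovec *remote_iov,
			      unsigned long riovcnt,
			      unsigned long flags);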
> Plus, a small naming detail: shouldn't the naming be more IO-like:
>
> sys_process_vm_read()
> sys_process_vm_write()
Yes, that looks better to me. I really wasn't sure how to name them.
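As a user-space usage sketch with that naming (the prototype below just
renames the illustrative iovec variant sketched above; none of this is a
final interface):

#include <sys/types.h>
#include <sys/uio.h>

/* hypothetical prototype, following the suggested naming */
ssize_t process_vm_read(pid_t pid,
			const struct iovec *local_iov, unsigned long liovcnt,
			const struct iovec *remote_iov, unsigned long riovcnt,
			unsigned long flags);

/* copy len bytes from address remote_addr in task pid into dst */
static ssize_t read_remote(pid_t pid, void *dst, void *remote_addr, size_t len)
{
	struct iovec local  = { .iov_base = dst,         .iov_len = len };
	struct iovec remote = { .iov_base = remote_addr, .iov_len = len };

	return process_vm_read(pid, &local, 1, &remote, 1, 0);
}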
Regards,
Chris
--
cyeoh@...ibm.com