Message-ID: <20100915154458.GE3013@sgi.com>
Date:	Wed, 15 Sep 2010 10:44:58 -0500
From:	Robin Holt <holt@....com>
To:	Linus Torvalds <torvalds@...ux-foundation.org>
Cc:	Christopher Yeoh <cyeoh@....ibm.com>, Avi Kivity <avi@...hat.com>,
	linux-kernel@...r.kernel.org,
	Linux Memory Management List <linux-mm@...ck.org>,
	Ingo Molnar <mingo@...e.hu>
Subject: Re: [RFC][PATCH] Cross Memory Attach

> > 3. ability to map part of another process's address space directly into
> >   the current one. Would have setup/tear down overhead, but this would
> >   be useful specifically for reduction operations where we don't even
> >   need to really copy the data once at all, but use it directly in
> >   arithmetic/logical operations on the receiver.
> 
> Don't even think about this. If you want to map another task's memory,
> use shared memory. The shared memory code knows about that. The races
> for anything else are crazy.

SGI has a similar, but significantly more difficult, problem to solve and
has written a fairly complex driver to handle exactly the scenario IBM
is proposing.  In our case, not only are we trying to directly access one
process's memory, we are doing it from a completely different operating
system instance running on the same NUMA fabric.

In our case (I have not looked at IBM's patch), we are actually using
get_user_pages() to take extra references on the struct pages.  We are
judicious about reference counting the mm, and we use get_task_mm() in all
places with the exception of process teardown (an ignorable detail for now).
We have a fault handler inserting PFNs as appropriate.  You can guess
at the complexity.  Even with all that complexity, we still have to flag
certain functionality as unsupported.
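
For context, the core of the pinning side looks roughly like the following
(a very stripped-down sketch; the xpmem_pin_pages()/xpmem_unpin_pages()
names are made up for illustration, error handling is omitted, and the
get_user_pages() call uses the current eight-argument form):

#include <linux/mm.h>
#include <linux/sched.h>
#include <linux/pagemap.h>

static long xpmem_pin_pages(struct task_struct *tsk, unsigned long start,
			    int nr_pages, struct page **pages)
{
	struct mm_struct *mm;
	long ret;

	/* Take a reference on the target mm so it cannot go away under us. */
	mm = get_task_mm(tsk);
	if (!mm)
		return -EINVAL;

	down_read(&mm->mmap_sem);
	/* Elevates the refcount on each struct page backing the range. */
	ret = get_user_pages(tsk, mm, start, nr_pages, 1 /* write */,
			     0 /* force */, pages, NULL);
	up_read(&mm->mmap_sem);

	mmput(mm);
	return ret;
}

static void xpmem_unpin_pages(struct page **pages, long nr_pinned)
{
	long i;

	/* Drop the extra references taken by get_user_pages(). */
	for (i = 0; i < nr_pinned; i++) {
		set_page_dirty_lock(pages[i]);
		put_page(pages[i]);
	}
}

The importing side's fault handler then takes the PFNs of those pinned
pages and maps them into the local address space (e.g. via vm_insert_pfn()
from a ->fault handler), which is where most of the real complexity lives.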

If we were to try to get that driver included in the kernel, how would
you suggest we expand the shared memory code to include support for the
coordination needed between those separate operating system instances?
I am genuinely interested and not trying to be argumentative.  This has
been on my "get done before Aug-1" list for months, and I have not had
any time to pursue it.

Thanks,
Robin
