Message-ID: <Pine.LNX.4.64.0802131052360.18472@schroedinger.engr.sgi.com>
Date:	Wed, 13 Feb 2008 11:00:05 -0800 (PST)
From:	Christoph Lameter <clameter@....com>
To:	Christian Bell <christian.bell@...gic.com>
cc:	Jason Gunthorpe <jgunthorpe@...idianresearch.com>,
	Rik van Riel <riel@...hat.com>,
	Andrea Arcangeli <andrea@...ranet.com>, a.p.zijlstra@...llo.nl,
	izike@...ranet.com, Roland Dreier <rdreier@...co.com>,
	steiner@....com, linux-kernel@...r.kernel.org, avi@...ranet.com,
	linux-mm@...ck.org, daniel.blueman@...drics.com,
	Robin Holt <holt@....com>, general@...ts.openfabrics.org,
	Andrew Morton <akpm@...ux-foundation.org>,
	kvm-devel@...ts.sourceforge.net
Subject: Re: [ofa-general] Re: Demand paging for memory regions

On Tue, 12 Feb 2008, Christian Bell wrote:

> You're arguing that a HW page table is not needed by describing a use
> case that is essentially what all RDMA solutions already do above the
> wire protocols (all solutions except Quadrics, of course).

The HW page table is not essential to the notification scheme. That
RDMA hardware uses its page table for linearization is a separate
issue. A chip could, for example, keep only a TLB cache and look up
entries through the OS page tables.
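Roughly like this (a sketch only -- the dev_fault structure and
dev_tlb_insert() are invented for illustration; get_user_pages() is
the existing interface for resolving a user address through the OS
page tables):

static int dev_tlb_miss(struct dev_fault *fault)
{
	struct page *page;
	int ret;

	/* Resolve the faulting user address via the OS page tables. */
	down_read(&fault->mm->mmap_sem);
	ret = get_user_pages(fault->task, fault->mm, fault->vaddr,
			     1, fault->write, 0, &page, NULL);
	up_read(&fault->mm->mmap_sem);
	if (ret != 1)
		return -EFAULT;

	/* Program the on-chip TLB. No long term pin is kept: the
	 * notifier callbacks shoot this TLB entry down again when
	 * the VM unmaps the page.
	 */
	dev_tlb_insert(fault->dev, fault->vaddr, page_to_phys(page));
	put_page(page);
	return 0;
}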

> > Let's say you have two systems, A and B. Each has its memory region,
> > MemA and MemB. Each side also has page tables for this region, PtA
> > and PtB.
> > If either side then accesses the page again then the reverse process
> > happens. If B accesses the page then it will first of all incur a page
> > fault because the entry in PtB is missing. The fault will then cause a
> > message to be sent to A to establish the page again. A will create an
> > entry in PtA and will then confirm to B that the page was established. At
> > that point RDMA operations can occur again.
> 
> The notifier-reclaim cycle you describe is akin to the out-of-band
> pin-unpin control messages used by existing communication libraries.
> Also, I think what you are proposing can have problems at scale -- A
> must keep track of all of the (potentially many) systems using MemA
> and cooperatively get agreement from all these systems before
> reclaiming the page.

Right. We (SGI) have done something like this for a long time with
XPMEM and it scales OK.
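For illustration, the fault path on B could look something like the
sketch below; every xpm_* name is invented here, it simply restates
the handshake quoted above in code form:

/* Node B faults on a page whose PtB entry was torn down, asks node
 * A to re-establish it in PtA, and resumes RDMA once A confirms.
 */
static int xpm_fault(struct xpm_region *r, unsigned long pgoff)
{
	struct xpm_msg reply;

	/* Entry missing in PtB: ask A to establish the page again. */
	xpm_send(r->peer, XPM_MSG_ESTABLISH, pgoff);
	xpm_wait_reply(r->peer, &reply);
	if (reply.status != XPM_OK)
		return -EFAULT;

	/* A confirmed: fill PtB, RDMA operations can occur again. */
	xpm_pte_insert(r, pgoff, reply.frame);
	return 0;
}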

> When messages are sufficiently large, the control messaging necessary
> to set up/tear down the regions is relatively small.  This is not
> always the case, however -- in programming models that employ smaller
> messages, the one-sided nature of RDMA is the most attractive part of
> it.

The messaging would only be needed if a process comes under memory
pressure. As long as there is enough memory, nothing like this will
occur.

> Nothing any communication/runtime system can't already do today.  The
> point of RDMA demand paging is enabling the possibility of using RDMA
> without the implied synchronization -- the optimistic part.  Using
> the notifiers to duplicate existing memory region handling for RDMA
> hardware that doesn't have HW page tables is possible but undermines
> the more important consumer of your patches in my opinion.

The notifier scheme should integrate into existing memory region
handling rather than duplicate it. If you already have library layers
that do this, then it should be possible to integrate with them.
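Something along these lines (a sketch against one revision of the
patches; the my_* names are invented, struct my_region is assumed to
embed a struct mmu_notifier mn, and the callback signatures may
differ in detail from whatever finally lands):

/* Hook the notifiers from the existing region code instead of
 * duplicating the region tracking.
 */
static void my_invalidate_range_start(struct mmu_notifier *mn,
				      struct mm_struct *mm,
				      unsigned long start,
				      unsigned long end)
{
	struct my_region *r = container_of(mn, struct my_region, mn);

	/* Shoot down device mappings covering [start, end) and
	 * quiesce RDMA against them; the existing region layer
	 * does the actual teardown.
	 */
	my_region_invalidate(r, start, end);
}

static const struct mmu_notifier_ops my_ops = {
	.invalidate_range_start	= my_invalidate_range_start,
};

static int my_region_track(struct my_region *r)
{
	r->mn.ops = &my_ops;
	return mmu_notifier_register(&r->mn, current->mm);
}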

> One other area that has not been brought up yet (I think) is the
> applicability of notifiers in letting users know when pinned memory
> is reclaimed by the kernel.  This is useful when a lower-level
> library employs lazy deregistration strategies on memory regions that
> are subsequently released to the kernel via the application's use of
> munmap or sbrk.  Ohio Supercomputing Center has work in this area but
> a generalized approach in the kernel would certainly be welcome.

The driver gets the notifications about memory being reclaimed, and it
could relay the release to user code as well.
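One way to relay it (entirely a sketch; the my_* context, the event
structure and the wait queue are invented here) would be to queue an
event on a per-process device file and wake up poll()ers, so the user
library can drop the affected entries from its lazy deregistration
cache:

static void my_notify_user(struct my_ctx *ctx,
			   unsigned long start, unsigned long end)
{
	/* May be called from reclaim context, hence GFP_ATOMIC. */
	struct my_event *ev = kmalloc(sizeof(*ev), GFP_ATOMIC);

	if (!ev)
		return;
	ev->start = start;
	ev->end = end;

	spin_lock(&ctx->ev_lock);
	list_add_tail(&ev->list, &ctx->ev_list);
	spin_unlock(&ctx->ev_lock);

	/* The user library poll()s/read()s the device file. */
	wake_up_interruptible(&ctx->ev_wait);
}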

Pinned memory currently *cannot* be reclaimed by the kernel because
the refcount is elevated. The VM tries to remove the mappings, sees
that it was not able to remove all references, gives up, and tries
again and again and again... Thus the potential for livelock.
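As a caricature of that loop (page_freeable() is invented here;
try_to_unmap() is the real unmapping entry point):

	while (!page_freeable(page)) {	/* never true while pinned  */
		try_to_unmap(page, 0);	/* the mappings go away ... */
		cond_resched();		/* ... the extra refcount   */
	}				/* does not: reclaim spins  */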
