Message-ID: <20080612205232.GB17826@sgi.com>
Date:	Thu, 12 Jun 2008 15:52:32 -0500
From:	Jack Steiner <steiner@....com>
To:	Andrew Morton <akpm@...ux-foundation.org>
Cc:	Ingo Molnar <mingo@...e.hu>, linux-kernel@...r.kernel.org,
	tglx@...utronix.de, holt@....com, andrea@...ranet.com,
	"David S. Miller" <davem@...emloft.net>
Subject: Re: [patch 00/11] GRU Driver

On Thu, Jun 12, 2008 at 11:03:36AM -0700, Andrew Morton wrote:
> On Thu, 12 Jun 2008 09:05:09 -0500 Jack Steiner <steiner@....com> wrote:
> 
> > On Thu, Jun 12, 2008 at 03:27:00PM +0200, Ingo Molnar wrote:
> > > 
> > > * steiner@....com <steiner@....com> wrote:
> > > 
> > > > This series of patches adds a driver for the SGI UV GRU. The driver is 
> > > > still in development but it currently compiles for both x86_64 & IA64. 
> > > > All simple regression tests pass on IA64. Although features remain to 
> > > > be added, I'd like to start the process of getting the driver into the 
> > > > kernel. Additional kernel drivers will depend on services provided by 
> > > > the GRU driver.
> > > > 
> > > > The GRU is a hardware resource located in the system chipset. The GRU 
> > > > contains memory that is mmaped into the user address space. This 
> > > > memory is used to communicate with the GRU to perform functions such 
> > > > as load/store, scatter/gather, bcopy, AMOs, etc.  The GRU is directly 
> > > > accessed by user instructions using user virtual addresses. GRU 
> > > > instructions (ex., bcopy) use user virtual addresses for operands.
> > > 
> > > did i get it right that it's basically a fast, hardware based message 
> > > passing interface that allows two tasks to communicate via DMA and 
> > > interrupts, without holding up the CPU? 
> > 
> > Yes
> > 
> > 
> > > If that is the case, wouldn't the 
> > > proper support model be a network driver, instead of these special 
> > > ioctls? (a network driver with no checksumming, with scatter-gather, 
> > > zero-copy and TSO support, etc.)
> > > 
> > > or a filesystem. Anything but special-purpose ioctls ...
> > 
> > The ioctls are not used directly by users.
> > 
> > Users drive the GRU by directly writing to the memory that is mmaped into
> > GRU space, i.e., they load/store directly to GRU space. The ioctls are used
> > infrequently by libgru.so to configure the driver during user initialization
> > and to handle errors that may occur.
> > 
> > For example, here is the code that is required to issue a GRU
> > instruction & wait for completion:
> > 
> 
> But could/should it be implemented as (say) a net driver?

I don't think so.

The GRU driver is not primarily a point-to-point communication engine. The
most common use of the GRU is by a single process, or possibly an OpenMP/MPI
application.  There is typically no end-to-end communication or RDMA
involved.  All data transfer takes place between blocks of cacheable memory
that are resident in the process address space.  There is nothing in the GRU
or GRU libraries that does anything equivalent to connection establishment
between different processes.

Applications on large NUMA systems use the GRU to access data that is
located on memory within the process address space but located on remote
nodes. For example, the GRU can pull large blocks of data from a remote node
to the local node asynchronously. Other GRU instructions provide
scatter/gather, AMOs, etc. but always operating on memory within the
existing process address space.
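
To make the "start it, overlap it, wait for it" shape of that concrete,
here is a rough software analogy. A pthread stands in for the GRU; the
real hardware needs no helper thread and ties up no CPU, but the
issue/overlap/wait pattern the application sees is the same. (Analogy
only - this is not the libgru.so interface.)

/* Software stand-in for the GRU usage model: start an asynchronous
 * copy, overlap it with other work, then wait for completion.  A
 * pthread plays the role of the chipset hardware here. */
#include <pthread.h>
#include <string.h>

struct async_copy {
	pthread_t	thread;
	void		*dst;
	const void	*src;
	size_t		len;
};

static void *copy_worker(void *arg)
{
	struct async_copy *ac = arg;

	memcpy(ac->dst, ac->src, ac->len);	/* GRU: bcopy-style transfer */
	return NULL;
}

static void copy_start(struct async_copy *ac, void *dst,
		       const void *src, size_t len)
{
	ac->dst = dst;
	ac->src = src;
	ac->len = len;
	pthread_create(&ac->thread, NULL, copy_worker, ac);
}

static void copy_wait(struct async_copy *ac)
{
	pthread_join(ac->thread, NULL);		/* GRU: wait for completion */
}

The application calls copy_start(), goes off and does unrelated work,
then calls copy_wait() before touching the destination. With the GRU the
"worker" is in the chipset, so the CPU is free the whole time.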

The one place where there is process-to-process communication is between MPI
processes. However, separate from the GRU, the MPI processes have to memory
map a common block of memory into the address spaces of both processes.
Nothing in the GRU or GRU library is aware that interprocess communication
is taking place.
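
For what it's worth, that common mapping is set up with ordinary shared
memory - the GRU plays no part in establishing it. Something along these
lines, plain POSIX, with a made-up segment name for illustration:

/* The kind of common mapping an MPI library might set up between two
 * processes.  Once mapped, it is just more user virtual address space
 * that GRU instructions can operate on; the GRU never knows that two
 * processes share it. */
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

static void *map_common_block(size_t len)
{
	void *p;
	int fd = shm_open("/my_mpi_segment", O_CREAT | O_RDWR, 0600);

	if (fd < 0)
		return MAP_FAILED;
	if (ftruncate(fd, len) < 0) {
		close(fd);
		return MAP_FAILED;
	}

	p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	close(fd);			/* the mapping survives the close */
	return p;
}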


The GRU hardware is the next generation of what is known on SN2 as the "mspec"
driver (see drivers/char/mspec.c). The GRU is much more complicated, but it
provides a similar capability - mmapping of special memory into the user
address space.

From a user standpoint, the user simply mmaps a chunk of GRU memory into the
user address space, then does loads & stores to the GRU memory to issue GRU
instructions to do data transfers. The user could also do the same data
transfers using processor load/store instructions but at a slower (we hope)
rate.
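
In code, that model is roughly the following. The device node name, the
mapping size and the layout of the control words are invented here just
to show the shape of the interaction - the real layout is defined by the
GRU hardware and hidden behind libgru.so:

/* Illustration only: mmap a chunk of "GRU space", then use plain
 * stores to launch an instruction and a load loop to poll for
 * completion.  Names, offsets and the word layout are made up. */
#include <fcntl.h>
#include <stdint.h>
#include <sys/mman.h>
#include <unistd.h>

#define GRU_MAP_LEN	4096		/* assumed size of one control block */

static int issue_one_instruction(void)
{
	volatile uint64_t *cb;
	int fd = open("/dev/gru", O_RDWR);	/* assumed device node */

	if (fd < 0)
		return -1;

	cb = mmap(NULL, GRU_MAP_LEN, PROT_READ | PROT_WRITE,
		  MAP_SHARED, fd, 0);
	close(fd);
	if (cb == MAP_FAILED)
		return -1;

	cb[1] = 0x1234;			/* store an operand (made-up layout)   */
	cb[0] = 1;			/* this store launches the instruction */

	while (cb[2] == 0)		/* poll a status word for completion   */
		;

	munmap((void *)cb, GRU_MAP_LEN);
	return 0;
}

Real code would of course go through the library wrappers rather than
poke raw offsets, but the point stands: it is all loads and stores to
mmapped memory, with no system call on the fast path.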


--- jack
