Message-ID: <20080917202102.GA166524@sgi.com>
Date:	Wed, 17 Sep 2008 15:21:02 -0500
From:	Jack Steiner <steiner@....com>
To:	"H. Peter Anvin" <hpa@...or.com>
Cc:	Dean Nelson <dcn@....com>, Ingo Molnar <mingo@...e.hu>,
	"Eric W. Biederman" <ebiederm@...ssion.com>,
	Alan Mayer <ajm@....com>, jeremy@...p.org,
	rusty@...tcorp.com.au, suresh.b.siddha@...el.com,
	torvalds@...ux-foundation.org, linux-kernel@...r.kernel.org,
	Thomas Gleixner <tglx@...utronix.de>,
	Yinghai Lu <Yinghai.lu@....com>
Subject: Re: [RFC 0/4] dynamically allocate arch specific system vectors

On Wed, Sep 17, 2008 at 12:15:42PM -0700, H. Peter Anvin wrote:
> Dean Nelson wrote:
> >
> >    sgi-gru driver
> >
> >The GRU is not an actual external device that is connected to an IOAPIC.
> >The gru is a hardware mechanism that is embedded in the node controller
> >(UV hub) that directly connects to the cpu socket. Any cpu (with permission)
> >can do direct loads and stores to the gru. Some of these stores will result
> >in an interrupt being sent back to the cpu that did the store.
> >
> >The interrupt vector used for this interrupt is not in an IOAPIC. Instead
> >it must be loaded into the GRU at boot or driver initialization time.
> >
> 
> Could you clarify there: is this one vector number per CPU, or are you 
> issuing a specific vector number and just varying the CPU number?

It is one vector for each cpu.

It is more efficient for software if the vector # is the same for all cpus,
but the software/hardware can support a unique vector for each cpu. This
assumes, of course, that the driver can determine the irq->vector mapping for
each cpu.
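
If per-cpu vectors were used, the driver side would need a little extra
bookkeeping along these lines (invented names, just a sketch, not actual
driver code):

	/*
	 * Hypothetical: the vector assigned to this device on each cpu,
	 * filled in at init time when the vectors are allocated. If the
	 * vector is the same everywhere, this collapses to a single value.
	 */
	static unsigned char gru_irq_vector[NR_CPUS];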


<probably-more-detail-than-you-want>

Physically, the system contains a large number of blades. Each blade has
several processor sockets plus a UV hub (node controller).  There are 2 GRUs
located in each UV hub.

Each GRU supports multiple simultaneous users.
Each user is assigned a context number (0 .. N-1). If an exception occurs,
the GRU uses the context number as an index into an array of [vector-apicid] pairs.
The [vector-apicid] pair identifies the cpu & vector for the interrupt.
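
Conceptually (invented names, just to picture it), each GRU holds a table
like this:

	/*
	 * Sketch of the per-GRU interrupt target table. The GRU indexes
	 * it by context number when it needs to raise an interrupt.
	 */
	#define GRU_NUM_CONTEXTS	16	/* illustrative value only */

	struct gru_intr_target {
		unsigned char	vector;		/* vector to raise on the cpu */
		unsigned int	apicid;		/* APIC id of the target cpu */
	};

	struct gru_intr_target gru_targets[GRU_NUM_CONTEXTS];	/* one array per GRU */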

Although the hardware supports it, we do not intend to send interrupts
off-blade.

The array of [vector-apicid] pairs is located in each GRU and must be
initialized at boot time or when the driver is loaded. There is a
separate array for each GRU.

When the driver receives an interrupt, it uses the vector number (or IRQ
number) to determine which GRU sent the interrupt.
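
The receive side is basically a table lookup keyed by the irq, something
like (invented names again, only a sketch):

	/*
	 * Sketch: map the irq back to the GRU that raised it. gru_of_irq[]
	 * would be filled in when the vectors/irqs are set up.
	 */
	static struct gru_state *gru_of_irq[NR_IRQS];

	static irqreturn_t gru_intr(int irq, void *dev_id)
	{
		struct gru_state *gru = gru_of_irq[irq];

		if (!gru)
			return IRQ_NONE;
		/* ... service the exception(s) pending in this GRU ... */
		return IRQ_HANDLED;
	}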


The simplest scheme would be to assign 2 vectors - one for each GRU in the UV hub.
Vector #0 would be loaded into each "vector" entry of the [vector-apicid] array for
GRU #0; vector #1 would be loaded into the [vector-apicid] array for GRU #1.
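
In rough pseudo-C (invented names; blade_local_apicid() is a made-up helper
that picks a cpu on the local blade), the boot/driver-load initialization for
that scheme would be:

	/*
	 * Sketch: target[g][ctx] is GRU g's [vector-apicid] entry for
	 * context ctx. Every entry of GRU g's array gets vector
	 * (base_vector + g) and the apicid of a blade-local cpu.
	 */
	for (g = 0; g < 2; g++)				/* 2 GRUs per UV hub */
		for (ctx = 0; ctx < GRU_NUM_CONTEXTS; ctx++) {
			target[g][ctx].vector = base_vector + g;
			target[g][ctx].apicid = blade_local_apicid(ctx);
		}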

The [vector-apicid] arrays on all nodes would be identical as far as vectors
are concerned. (The apicids would differ and would target blade-local cpus.)
Since interrupts are not sent off-node, the driver can use the vector (irq)
to uniquely identify the source of the interrupt.

However, we have a lot of flexibility here. Any scheme that provides the right
information to the driver is ok. Note that servicing of these interrupts
is likely to be time-critical. We need this path to be as efficient as possible.



--- jack




