Message-ID: <20080916204654.GA3532@sgi.com>
Date:	Tue, 16 Sep 2008 15:46:54 -0500
From:	Dean Nelson <dcn@....com>
To:	Ingo Molnar <mingo@...e.hu>
Cc:	"Eric W. Biederman" <ebiederm@...ssion.com>,
	Alan Mayer <ajm@....com>, jeremy@...p.org,
	rusty@...tcorp.com.au, suresh.b.siddha@...el.com,
	torvalds@...ux-foundation.org, linux-kernel@...r.kernel.org,
	"H. Peter Anvin" <hpa@...or.com>,
	Thomas Gleixner <tglx@...utronix.de>,
	Yinghai Lu <Yinghai.lu@....com>
Subject: Re: [RFC 0/4] dynamically allocate arch specific system vectors

On Tue, Sep 16, 2008 at 10:24:48AM +0200, Ingo Molnar wrote:
> 
> * Dean Nelson <dcn@....com> wrote:
> 
> > > while i understand the UV_BAU_MESSAGE case (TLB flushes are 
> > > special), why does sgi-gru and sgi-xp need to go that deep? They are 
> > > drivers, they should be able to make use of an ordinary irq just 
> > > like the other 2000 drivers we have do.
> > 
> > The sgi-gru driver needs to be able to allocate a single irq/vector 
> > pair for all CPUs even those that are not currently online. The sgi-xp 
> > driver has similar but not as stringent needs.
> 
> why does it need to allocate a single irq/vector pair? Why is a regular 
> interrupt not good?

When you speak of a 'regular interrupt', I assume you are referring to just the
irq number, with the vector and the CPU(s) it is mapped to hidden from the
driver?


    sgi-gru driver

The GRU is not an actual external device connected to an IOAPIC. It is a
hardware mechanism embedded in the node controller (UV hub), which connects
directly to the CPU socket. Any CPU (with permission) can do direct loads and
stores to the GRU. Some of these stores will result in an interrupt being sent
back to the CPU that did the store.

The vector used for this interrupt is not programmed into an IOAPIC. Instead
it must be loaded into the GRU at boot or driver initialization time.

The OS needs to route these interrupts back to the GRU driver's interrupt
handler on the CPU that received the interrupt. This is also a
performance-critical path: there should be no globally shared cachelines
involved in the routing.

The actual vector associated with the IRQ does not matter as long as it is
a relatively high-priority interrupt. The vector does need to be mapped to
all of the possible CPUs in the partition. The GRU driver needs to know the
vector's value so that it can load it into the GRU.
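
Roughly speaking, what the driver would like to be able to do is something
like the following (uv_alloc_system_vector() and gru_write_interrupt_vector()
are made-up names, used only to illustrate the shape of the interface we are
after; nothing like them exists today):

#include <linux/interrupt.h>

/* made-up interfaces, used only to illustrate what is wanted */
extern int uv_alloc_system_vector(int *vector);
extern void gru_write_interrupt_vector(int vector);

static irqreturn_t gru_intr(int irq, void *dev_id)
{
	/* runs on whichever CPU the GRU sent the interrupt back to */
	return IRQ_HANDLED;
}

static int gru_irq_init(void)
{
	int irq, vector, ret;

	/* one irq/vector pair, valid on every possible CPU in the partition */
	irq = uv_alloc_system_vector(&vector);
	if (irq < 0)
		return irq;

	ret = request_irq(irq, gru_intr, 0, "sgi-gru", NULL);
	if (ret)
		return ret;

	/* tell the GRU which vector to raise when a store requires it */
	gru_write_interrupt_vector(vector);
	return 0;
}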

    sgi-xp driver

The sgi-xp driver utilizes the node controller's message queue capability to
send messages from one system partition (a single SSI) to another partition.

A message queue can be configured to have the node controller raise an
interrupt whenever a message is written into it. This configuration is
accomplished by setting up two processor-writable MMRs located in the
node controller. The vector number and the apicid of the targeted CPU need
to be written into one of these MMRs. There is no IOAPIC associated with
this.
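
Very roughly, something like the following (the MMR names, the bit layout,
and the uv_write_mmr()/encode_mq_size() accessors are all invented here for
illustration, not the real UV hub definitions):

#include <linux/smp.h>
#include <asm/io.h>

/* all of the below is invented for illustration */
#define UV_MQ_PAYLOAD_MMR	0x0UL	/* hypothetical hub register offsets */
#define UV_MQ_INTERRUPT_MMR	0x8UL
extern void uv_write_mmr(unsigned long offset, unsigned long val);
extern unsigned long encode_mq_size(size_t size);

void xp_enable_mq_interrupt(void *mq, size_t mq_size, int cpu, int vector)
{
	/* first MMR: physical base address and size of the message queue */
	uv_write_mmr(UV_MQ_PAYLOAD_MMR,
		     virt_to_phys(mq) | encode_mq_size(mq_size));

	/* second MMR: apicid of the targeted CPU and the vector to raise */
	uv_write_mmr(UV_MQ_INTERRUPT_MMR,
		     ((unsigned long)cpu_physical_id(cpu) << 32) | vector);
}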

So one thought was that, once insmod'd, sgi-xp would allocate a message queue,
allocate an irq/vector pair for a CPU located on the node where the message
queue resides, and then set the MMRs with the memory address and length of the
message queue, plus the vector and the CPU's apicid. It would then repeat this,
since the driver actually requires two message queues.
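
Expressed as code (reusing the made-up xp_enable_mq_interrupt() from above,
with XP_MQ_SIZE, xp_mq_intr() and uv_alloc_vector_for_cpu() equally
hypothetical), the insmod-time sequence for one of the two queues would look
roughly like:

#include <linux/slab.h>
#include <linux/interrupt.h>
#include <linux/topology.h>

#define XP_MQ_SIZE	(64 * 1024)	/* hypothetical queue size */
extern int uv_alloc_vector_for_cpu(int cpu, int *vector);	/* made up */
extern void xp_enable_mq_interrupt(void *mq, size_t mq_size,
				   int cpu, int vector);	/* from above */

static irqreturn_t xp_mq_intr(int irq, void *dev_id)
{
	/* a message has arrived in the queue passed in as dev_id */
	return IRQ_HANDLED;
}

static int xp_init_one_mq(int nid)
{
	void *mq;
	int cpu, irq, vector, ret;

	/* the queue memory must reside on the node whose hub will serve it */
	mq = kmalloc_node(XP_MQ_SIZE, GFP_KERNEL, nid);
	if (!mq)
		return -ENOMEM;

	/* pick a CPU on that node and get an irq/vector pair targeted at it */
	cpu = cpumask_first(cpumask_of_node(nid));
	irq = uv_alloc_vector_for_cpu(cpu, &vector);
	if (irq < 0) {
		kfree(mq);
		return irq;
	}

	ret = request_irq(irq, xp_mq_intr, 0, "sgi-xp", mq);
	if (ret) {
		kfree(mq);
		return ret;
	}

	/* point the hub's MMRs at the queue and at the cpu/vector pair */
	xp_enable_mq_interrupt(mq, XP_MQ_SIZE, cpu, vector);
	return 0;
}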


I hope this helps answer your question, or at least shows you what problem
we are trying to solve.

Thanks,
Dean

