Message-id: <Pine.LNX.4.64.0702110753390.8245@montezuma.fsmlabs.com>
Date:	Sun, 11 Feb 2007 08:16:39 -0800 (PST)
From:	Zwane Mwaikambo <zwane@...radead.org>
To:	"Eric W. Biederman" <ebiederm@...ssion.com>
Cc:	Ashok Raj <ashok.raj@...el.com>, Ingo Molnar <mingo@...e.hu>,
	Andrew Morton <akpm@...l.org>, linux-kernel@...r.kernel.org,
	"Lu, Yinghai" <yinghai.lu@....com>,
	Natalie Protasevich <protasnb@...il.com>,
	Andi Kleen <ak@...e.de>, Coywolf Qi Hunt <coywolf@...ecn.org>
Subject: Re: What are the real ioapic rte programming constraints?

On Sun, 11 Feb 2007, Eric W. Biederman wrote:

> What I am looking at doing is:
> 
> mask
> read io_apic 
> -- Past this point no more irqs are expected from the io_apic
> -- Now I work to drain any inflight/pending instances of the irq 
> send ipi to all irq destination cpus and wait for it to return
> read lapic
> disable local irqs
> take irq lock
> -- Now no more irqs are expected to arrive
> reprogram vector and destination
> enable local irqs
> unmask
> 
> What I need to ensure is that I have a point where I will not receive any
> new messages from an ioapic about a particular irq anymore.  Even if
> everything is working perfectly, setting the disable bit is not enough
> because there could be an irq message in flight. So I need to give any
> in flight irqs a chance to complete.  
> 
> With a little luck that logic will cover your 7500 disable race as
> well. If not and there is a reasonable work around we should look at
> that.  This is not a speed critical path so we can afford to do a
> little more work.
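
To make sure I follow, that sequence would look roughly like the sketch
below.  The helpers here (do_nothing, reprogram_vector_and_dest, and the
two "sync" reads) are placeholders for illustration, not existing
interfaces:

static void do_nothing(void *info)
{
}

static void migrate_irq(unsigned int irq)
{
	struct irq_desc *desc = irq_desc + irq;
	unsigned long flags;
	int cpu;

	/* Mask and read back so the io_apic sends no new messages
	 * for this irq. */
	desc->chip->mask(irq);
	(void)io_apic_read(0, 0);		/* stands in for "read io_apic" */

	/* Drain in-flight instances: the IPI runs at a lower priority
	 * than any hardware vector, so once it has returned on a cpu
	 * the irq handler has completed there. */
	for_each_cpu_mask(cpu, desc->affinity)
		smp_call_function_single(cpu, do_nothing, NULL, 0, 1);

	(void)apic_read(APIC_ID);		/* stands in for "read lapic" */
	local_irq_save(flags);
	spin_lock(&desc->lock);

	/* No new instances of this irq can arrive past this point. */
	reprogram_vector_and_dest(irq);		/* placeholder */

	spin_unlock(&desc->lock);
	local_irq_restore(flags);
	desc->chip->unmask(irq);
}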

The 7500 issue isn't actually a race but a disease: if you mask a pending 
irq in its RTE, the PCI hub generates an INTx message corresponding to 
that irq. This was apparently done to support booting OSes without APIC 
support. So the following would occur:

- irqN pending on IOAPIC
- mask
=> INTx message for irqN

Unfortunately it appears the code below would be affected by this as 
well; the relevant reference is:

2.15.2 PCI Express* Legacy INTx Support and Boot Interrupt
http://download.intel.com/design/chipsets/datashts/30262802.pdf

> static void mask_get_irq(unsigned int irq)
> {
> 	struct irq_desc *desc = irq_desc + irq;
> 	int cpu;
> 
> 	spin_lock(&vector_lock);
> 
> 	/*
> 	 * Mask the irq so it will no longer occur
> 	 */
> 	desc->chip->mask(irq);
> 
> 	/* If I can run a lower priority vector on another cpu
> 	 * then obviously the irq has completed on that cpu.  SMP call
> 	 * function is lower priority than all of the hardware
> 	 * irqs.
> 	 */
> 	for_each_cpu_mask(cpu, desc->affinity)
> 		smp_call_function_single(cpu, affinity_noop, NULL, 0, 1);
> 
> 	/*
> 	 * Ensure irqs have cleared the local cpu
> 	 */
> 	lapic_sync();
> 	local_irq_disable();
> 	lapic_sync();
> 	spin_lock(&desc->lock);
> }
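
On the workaround side, if the hub really does let you turn boot
interrupt generation off (2.15.2 should say), a quirk along the lines
below might be worth looking at.  The config offset, bit and device ID
are placeholders, not values from the datasheet:

/* Illustrative only: disable boot interrupt generation in the hub,
 * assuming it exposes a control bit for it. */
#define HUB_BOOT_IRQ_CTRL	0x40		/* placeholder config offset */
#define HUB_BOOT_IRQ_DISABLE	(1 << 14)	/* placeholder bit */

static void quirk_disable_boot_interrupt(struct pci_dev *dev)
{
	u16 ctrl;

	pci_read_config_word(dev, HUB_BOOT_IRQ_CTRL, &ctrl);
	ctrl |= HUB_BOOT_IRQ_DISABLE;
	pci_write_config_word(dev, HUB_BOOT_IRQ_CTRL, ctrl);
}
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x0000 /* placeholder */,
			quirk_disable_boot_interrupt);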
