Message-ID: <20090203205727.GA4460@elte.hu>
Date:	Tue, 3 Feb 2009 21:57:27 +0100
From:	Ingo Molnar <mingo@...e.hu>
To:	Linus Torvalds <torvalds@...ux-foundation.org>,
	"David S. Miller" <davem@...emloft.net>
Cc:	Thomas Gleixner <tglx@...utronix.de>,
	Jesse Barnes <jesse.barnes@...el.com>,
	"Rafael J. Wysocki" <rjw@...k.pl>,
	Benjamin Herrenschmidt <benh@...nel.crashing.org>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	Andreas Schwab <schwab@...e.de>, Len Brown <lenb@...nel.org>
Subject: Re: Reworking suspend-resume sequence (was: Re: PCI PM: Restore
	standard config registers of all devices early)


* Linus Torvalds <torvalds@...ux-foundation.org> wrote:

> So I wouldn't worry too much. I think this is interesting mostly from a 
> performance standpoint - MSI interrupts are supposed to be fast, and under 
> heavy interrupt load I could easily see something like
> 
>  - cpu1: handles interrupt, has acked it, calls down to the handler
> 
>  - the handler clears the original irq source, but another packet (or disk 
>    completion) happens almost immediately
> 
>  - cpu2 takes the second interrupt, but it's still IRQ_INPROGRESS, so it 
>    masks.
> 
>  - cpu1 gets back and unmasks etc and now really handles it because of 
>    IRQ_PENDING.
> 
> Note how the mask/unmask were all just costly extra overhead over the PCI 
> bus. If we're talking something like high-performance 10Gbit ethernet (or 
> even maybe fast SSD disks), driver writers actually do count PCI cycles, 
> because a single PCI read can be several hundred ns, and if you take a 
> thousand interrupts per second, it does add up.
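
For illustration, a rough sketch of the kind of edge flow handler being
described above - simplified, with made-up names, not the actual
kernel/irq/chip.c code:

	/*
	 * Illustrative sketch only. It shows the mask/unmask plus
	 * IRQ_PENDING replay dance from the scenario above; locking
	 * and the ack details are simplified.
	 */
	static void sketch_handle_edge_irq(unsigned int irq, struct irq_desc *desc)
	{
		spin_lock(&desc->lock);

		if (desc->status & IRQ_INPROGRESS) {
			/*
			 * Another CPU (cpu1 above) is already in the
			 * handler: remember the new interrupt, mask the
			 * source over the PCI bus and let cpu1 replay
			 * it - the costly path.
			 */
			desc->status |= IRQ_PENDING | IRQ_MASKED;
			desc->chip->mask(irq);
			goto out_unlock;
		}

		desc->status |= IRQ_INPROGRESS;
		do {
			if (desc->status & IRQ_MASKED) {
				desc->chip->unmask(irq);	/* another PCI access */
				desc->status &= ~IRQ_MASKED;
			}
			desc->status &= ~IRQ_PENDING;
			spin_unlock(&desc->lock);

			desc->action->handler(irq, desc->action->dev_id);

			spin_lock(&desc->lock);
			/* replay whatever arrived while the handler ran */
		} while (desc->status & IRQ_PENDING);

		desc->status &= ~IRQ_INPROGRESS;
	out_unlock:
		spin_unlock(&desc->lock);
	}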

In practice MSI (and in particular MSI-X) irq sources tend to be bound to a 
single CPU on modern x86 hardware. The kernel does not do IRQ balancing 
anymore, nor does the hardware; what we have is a slow irq-balancer daemon 
(irqbalanced) in user-space. So a singular IRQ source, especially an MSI one, 
stays on the same CPU 99.9% of the time. Changing affinity is possible and 
always has to work reliably, but it is a performance slowpath.
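
For reference, changing that affinity from user-space goes through 
/proc/irq/<N>/smp_affinity - a minimal sketch, where the IRQ number and the 
CPU are made up for the example:

	/* Pin (hypothetical) IRQ 42 to CPU 3 by writing a hex cpumask. */
	#include <stdio.h>

	int main(void)
	{
		const int irq = 42;			/* hypothetical MSI vector */
		const unsigned long mask = 1UL << 3;	/* CPU 3 only */
		char path[64];
		FILE *f;

		snprintf(path, sizeof(path), "/proc/irq/%d/smp_affinity", irq);
		f = fopen(path, "w");
		if (!f) {
			perror("fopen");
			return 1;
		}
		fprintf(f, "%lx\n", mask);	/* hex cpumask, as the file expects */
		fclose(f);
		return 0;
	}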

An increasing trend is to have multiple irqs per device (multiple descriptor 
rings, split rx and tx rings with separate irq sources), and each IRQ can get 
balanced to a separate CPU. But those irqs cannot interact at the ->mask() 
level, as each IRQ has its own irq_desc.
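
A minimal sketch of that pattern via pci_enable_msix()/request_irq(); the 
device name, ring count and error unwinding below are invented/omitted:

	#include <linux/pci.h>
	#include <linux/interrupt.h>

	#define NR_RINGS 4			/* hypothetical ring count */

	static struct msix_entry msix[NR_RINGS];

	static irqreturn_t ring_irq(int irq, void *ring)
	{
		/* per-ring rx/tx completion work would go here */
		return IRQ_HANDLED;
	}

	static int setup_per_ring_irqs(struct pci_dev *pdev, void *rings[NR_RINGS])
	{
		int i, err;

		for (i = 0; i < NR_RINGS; i++)
			msix[i].entry = i;	/* slot in the device's MSI-X table */

		err = pci_enable_msix(pdev, msix, NR_RINGS);
		if (err)
			return err;		/* got fewer vectors than asked for */

		/*
		 * Each vector is a separate linux irq with its own irq_desc,
		 * so each ring can be made affine to its own CPU.
		 */
		for (i = 0; i < NR_RINGS; i++) {
			err = request_irq(msix[i].vector, ring_irq, 0,
					  "sketchdev-ring", rings[i]);
			if (err)
				return err;	/* real code would unwind here */
		}
		return 0;
	}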

The most advanced way of balancing IRQs is not widespread yet: it is where 
devices actually interpret the payload and send completions dynamically to 
differing CPUs - depending on things like the TCP/IP hash value or an 
in-descriptor "target CPU". That way we could get the completion on the CPU 
the work was submitted from (and where the data structures are the most 
cache-local).
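
Purely as a conceptual illustration - the hash and the table below are 
invented; real hardware would use its own hash (e.g. Toeplitz) and an 
indirection table - the steering idea looks roughly like this:

	#include <stdint.h>

	#define NR_STEER_CPUS		16	/* assumed CPU count for the example */
	#define STEER_TABLE_SIZE	128

	/* filled in on the submission side, looked up on completion */
	static uint8_t flow_to_cpu[STEER_TABLE_SIZE];

	static unsigned int flow_hash(uint32_t saddr, uint32_t daddr,
				      uint16_t sport, uint16_t dport)
	{
		uint32_t h = saddr ^ daddr ^ ((uint32_t)sport << 16 | dport);

		h ^= h >> 16;
		h *= 0x45d9f3b;			/* arbitrary mixing constant */
		h ^= h >> 16;
		return h & (STEER_TABLE_SIZE - 1);
	}

	/* On submission: remember which CPU owns this flow's data structures. */
	static void note_submitting_cpu(uint32_t s, uint32_t d, uint16_t sp,
					uint16_t dp, unsigned int cpu)
	{
		flow_to_cpu[flow_hash(s, d, sp, dp)] = cpu % NR_STEER_CPUS;
	}

	/* On completion: the device would target this CPU's interrupt vector. */
	static unsigned int completion_cpu(uint32_t s, uint32_t d, uint16_t sp,
					   uint16_t dp)
	{
		return flow_to_cpu[flow_hash(s, d, sp, dp)];
	}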

That principle works both for networking and for other IO transports - but 
we have little support for it yet. It would work really well for workloads 
where one physical device is shared by many CPUs.

(A lesser method that approximates this is to use lots of 
submission/completion rings per device and bind them to CPUs - but the number 
of rings can never really keep up with the number of CPUs possible in a 
system.)
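
A tiny sketch of that approximation - the fold from CPU to ring is exactly 
where the limitation comes from:

	#include <linux/smp.h>

	/*
	 * Submissions from a CPU go to "its" ring; with fewer rings than
	 * CPUs several CPUs end up sharing one ring, so this never quite
	 * matches true per-CPU completion steering.
	 */
	static inline unsigned int ring_for_this_cpu(unsigned int nr_rings)
	{
		return smp_processor_id() % nr_rings;
	}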

And in this most advanced mode of MSI IRQs - if MSI devices had the ability 
to direct each IRQ to a specific CPU (they don't have that right now AFAICT) - 
we'd run into the overhead scenarios you describe above, and your 
edge-triggered flow is the most performant one.

	Ingo
