Message-ID: <20120521124025.GC17065@gmail.com>
Date:	Mon, 21 May 2012 14:40:26 +0200
From:	Ingo Molnar <mingo@...nel.org>
To:	Alexander Gordeev <agordeev@...hat.com>,
	Arjan van de Ven <arjan@...radead.org>
Cc:	linux-kernel@...r.kernel.org, x86@...nel.org,
	Suresh Siddha <suresh.b.siddha@...el.com>,
	Cyrill Gorcunov <gorcunov@...nvz.org>,
	Yinghai Lu <yinghai@...nel.org>,
	Linus Torvalds <torvalds@...ux-foundation.org>
Subject: Re: [PATCH 2/3] x86: x2apic/cluster: Make use of lowest priority
 delivery mode


* Alexander Gordeev <agordeev@...hat.com> wrote:

> > So, in theory, prior the patch you should be seeing irqs go 
> > to only one CPU, while after the patch they are spread out 
> > amongst the CPU. If it's using LowestPrio delivery then we 
> > depend on the hardware doing this for us - how does this 
> > work out in practice, are the target CPUs round-robin-ed, 
> > with a new CPU for every new IRQ delivered?
> 
> That is exactly what I can observe.
> 
> As for 'target CPUs round-robin-ed' and 'with a new CPU for 
> every new IRQ delivered' -- that is something we can not 
> control, as you noted. Nor do we care, to my understanding.
> 
> I obviously can not vouch for every piece of h/w out there, 
> but on my PowerEdge M910, with some half-dozen clusters of 
> six CPUs each, the interrupts are perfectly balanced among 
> the CPUs present in the IRTEs.

But that kind of 'perfect balance' is not desirable in many cases.

When the hardware round-robins the interrupts, each interrupt 
will in essence go to a 'cache cold' CPU. This is pretty much 
the worst possible thing to do in most cases: while it's 
"perfectly balanced" in the sense of distributing cycles evenly 
between CPUs, each interrupt handler execution will generate an 
avalanche of cache misses for cache lines that were modified 
during the previous invocation of the irq.
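
To make that concrete, here is a purely illustrative handler 
sketch (hypothetical names, not from the patch): every field it 
dirties was last written on whichever CPU took the previous 
interrupt, so round-robined delivery pulls those lines in cold:

	#include <linux/types.h>
	#include <linux/jiffies.h>
	#include <linux/interrupt.h>

	/* Hypothetical per-IRQ state, dirtied on every invocation. */
	struct demo_irq_state {
		u64	nr_handled;	/* bumped every interrupt */
		u64	last_stamp;	/* likewise written every time */
	};

	static irqreturn_t demo_handler(int irq, void *dev_id)
	{
		struct demo_irq_state *s = dev_id;

		s->nr_handled++;		/* cache miss on a new CPU */
		s->last_stamp = jiffies;	/* another bounced line */
		return IRQ_HANDLED;
	}

Each of those stores has to fetch its cacheline across the 
interconnect whenever the previous invocation ran elsewhere.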

One notable exception is when the CPUs are SMT/Hyperthreading 
siblings, in that case they are sharing even the L1 cache, so 
there's very little cost to round-robining the IRQs within the 
CPU mask.
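
If we only wanted to allow round-robin inside such a sibling 
group, a minimal sketch (untested, assuming the stock 
cpumask/topology helpers) could clamp the destination mask:

	#include <linux/cpumask.h>
	#include <linux/topology.h>

	/*
	 * Hypothetical helper, not part of this patch: restrict
	 * the requested destination mask to the SMT siblings of
	 * @cpu, falling back to @cpu alone if the intersection
	 * is empty.
	 */
	static void clamp_to_smt_siblings(struct cpumask *dst,
					  const struct cpumask *requested,
					  int cpu)
	{
		cpumask_and(dst, requested, topology_thread_cpumask(cpu));
		if (cpumask_empty(dst))
			cpumask_copy(dst, cpumask_of(cpu));
	}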

But AFAICS irqbalanced will spread irqs across masks wider than 
SMT sibling boundaries, exposing us to the above performance 
problem.
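
An easy way to check where the irqs actually land is to watch 
the per-CPU counters in /proc/interrupts, e.g. with a trivial 
userspace snippet like this (illustrative only):

	#include <stdio.h>
	#include <string.h>

	/* Print the /proc/interrupts line of one IRQ (argv[1]) so
	   the per-CPU distribution can be eyeballed over time. */
	int main(int argc, char **argv)
	{
		char line[4096], key[32];
		FILE *f;

		if (argc < 2)
			return 1;
		f = fopen("/proc/interrupts", "r");
		if (!f)
			return 1;
		snprintf(key, sizeof(key), "%s:", argv[1]);
		while (fgets(line, sizeof(line), f)) {
			const char *p = line;

			while (*p == ' ')
				p++;
			if (!strncmp(p, key, strlen(key)))
				fputs(line, stdout);
		}
		fclose(f);
		return 0;
	}

Run it repeatedly while generating interrupts and the counter 
deltas show which CPUs are being targeted.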

So I think we need to tread carefully here.

Thanks,

	Ingo
