Message-ID: <20120521093648.GC28930@dhcp-26-207.brq.redhat.com>
Date:	Mon, 21 May 2012 11:36:49 +0200
From:	Alexander Gordeev <agordeev@...hat.com>
To:	Ingo Molnar <mingo@...nel.org>
Cc:	linux-kernel@...r.kernel.org, x86@...nel.org,
	Suresh Siddha <suresh.b.siddha@...el.com>,
	Cyrill Gorcunov <gorcunov@...nvz.org>,
	Yinghai Lu <yinghai@...nel.org>
Subject: Re: [PATCH 2/3] x86: x2apic/cluster: Make use of lowest priority
 delivery mode

On Mon, May 21, 2012 at 10:22:40AM +0200, Ingo Molnar wrote:
> 
> * Alexander Gordeev <agordeev@...hat.com> wrote:
> 
> > Currently x2APIC in logical destination mode delivers 
> > interrupts to a single CPU, no matter how many CPUs were 
> > specified in the destination cpumask.
> > 
> > This fix enables delivery of interrupts to multiple CPUs by
> > bit-ORing the Logical IDs of destination CPUs that have a
> > matching Cluster ID.
> > 
> > Because only one cluster can be specified in a message
> > destination address, the destination cpumask is searched for
> > the cluster that contains the maximum number of CPUs matching
> > this cpumask. The CPUs in that cluster are selected to receive
> > the interrupts, while all other CPUs in the cpumask are ignored.
> 
> I'm wondering how you tested this. AFAICS current irqbalanced 
> will create masks but on x2apic only the first CPU is targeted 
> by the kernel.

Right, that is what this patch is intended to change. So I use:
'hwclock --test' to generate RTC interrupts,
/proc/interrupts to check where and how many interrupts were delivered,
/proc/irq/8/smp_affinity to check how the clusters are chosen.
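
For reference, the selection policy described in the changelog above
boils down to something like the user-space sketch below. It is a
minimal illustration only: it assumes the architectural x2APIC
logical-ID layout (bits 31:16 cluster ID, bits 15:0 one bit per CPU
within the cluster) and a flat cpu -> logical-ID mapping, and names
like pick_cluster_dest() are made up here, not taken from the patch.

#include <stdint.h>
#include <stdio.h>

#define NR_CPUS 64

/* x2APIC logical ID: bits 31:16 cluster ID, bits 15:0 per-CPU bit */
static uint32_t cpu_logical_id(int cpu)
{
	return ((uint32_t)(cpu / 16) << 16) | (1u << (cpu % 16));
}

/*
 * Among the CPUs set in 'mask', find the cluster that contains the
 * most of them and OR the logical IDs of those CPUs together; CPUs
 * in other clusters are ignored, as the changelog describes.
 */
static uint32_t pick_cluster_dest(uint64_t mask)
{
	uint32_t best_dest = 0;
	int best_count = 0;
	int cpu, other;

	for (cpu = 0; cpu < NR_CPUS; cpu++) {
		uint32_t cluster, dest = 0;
		int count = 0;

		if (!(mask & (1ull << cpu)))
			continue;

		cluster = cpu_logical_id(cpu) >> 16;
		for (other = 0; other < NR_CPUS; other++) {
			if ((mask & (1ull << other)) &&
			    (cpu_logical_id(other) >> 16) == cluster) {
				dest |= cpu_logical_id(other);
				count++;
			}
		}
		if (count > best_count) {
			best_count = count;
			best_dest = dest;
		}
	}
	return best_dest;
}

int main(void)
{
	/* CPUs 0-2 (cluster 0) and CPUs 16-19 (cluster 1) */
	uint64_t mask = 0x7ull | (0xfull << 16);

	/* cluster 1 wins with four CPUs; prints dest = 0x1000f */
	printf("dest = 0x%x\n", (unsigned)pick_cluster_dest(mask));
	return 0;
}

The real code of course works on the kernel's cpumasks and per-CPU
logical IDs; the sketch is only meant to show the "largest cluster
wins, the rest of the cpumask is dropped" policy.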

> So, in theory, prior to the patch you should be seeing irqs go
> to only one CPU, while after the patch they are spread out
> amongst the CPUs. If it's using LowestPrio delivery then we
> depend on the hardware doing this for us - how does this work
> out in practice? Are the target CPUs round-robin-ed, with a new
> CPU for every new IRQ delivered?

That is exactly what I can observe.

As for the target CPUs being round-robin-ed, with a new CPU for every
new IRQ delivered -- that is something we cannot control, as you noted.
Nor do we need to, to my understanding.

I cannot vouch for every piece of h/w out there, obviously, but on my
PowerEdge M910, with half a dozen clusters of six CPUs each, the
interrupts are perfectly balanced among the CPUs present in the IRTEs.


> Thanks,
> 
> 	Ingo

-- 
Regards,
Alexander Gordeev
agordeev@...hat.com
