Message-ID: <20120521144812.GD28930@dhcp-26-207.brq.redhat.com>
Date:	Mon, 21 May 2012 16:48:13 +0200
From:	Alexander Gordeev <agordeev@...hat.com>
To:	Ingo Molnar <mingo@...nel.org>
Cc:	Arjan van de Ven <arjan@...radead.org>,
	linux-kernel@...r.kernel.org, x86@...nel.org,
	Suresh Siddha <suresh.b.siddha@...el.com>,
	Cyrill Gorcunov <gorcunov@...nvz.org>,
	Yinghai Lu <yinghai@...nel.org>,
	Linus Torvalds <torvalds@...ux-foundation.org>
Subject: Re: [PATCH 2/3] x86: x2apic/cluster: Make use of lowest priority
 delivery mode

On Mon, May 21, 2012 at 02:40:26PM +0200, Ingo Molnar wrote:
> But that is not 'perfectly balanced' in many cases.
> 
> When the hardware round-robins the interrupts then each 
> interrupt will go to a 'cache cold' CPU in essence. This is 
> pretty much the worst possible thing to do in most cases: 
> while it's "perfectly balanced" in the sense of distributing 
> cycles evenly between CPUs, each interrupt handler execution 
> will generate an avalanche of cache misses for cachelines that 
> were modified in the previous invocation of the irq.

Absolutely.

There are at least two more offenders :) that exercise lowest priority +
logical addressing in the same way, so in this regard the patch is
nothing new:


/*
 * Flat logical addressing: the low bits of the cpumask are used
 * directly as a multi-CPU logical destination, leaving the choice
 * of the target CPU to the hardware (lowest priority delivery).
 */
static inline unsigned int
default_cpu_mask_to_apicid(const struct cpumask *cpumask)
{
	return cpumask_bits(cpumask)[0] & APIC_ALL_CPUS;
}

static unsigned int summit_cpu_mask_to_apicid(const struct cpumask *cpumask)
{
	unsigned int round = 0;
	int cpu, apicid = 0;

	/*
	 * The cpus in the mask must all be on the apic cluster.
	 */
	for_each_cpu(cpu, cpumask) {
		int new_apicid = early_per_cpu(x86_cpu_to_logical_apicid, cpu);

		if (round && APIC_CLUSTER(apicid) != APIC_CLUSTER(new_apicid)) {
			printk("%s: Not a valid mask!\n", __func__);
			return BAD_APICID;
		}
		apicid |= new_apicid;
		round++;
	}
	return apicid;
}
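
In both cases the destination encodes several CPUs at once and the
final choice of target is left to the hardware, which is exactly what
the patch does for x2apic cluster mode.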

> One notable exception is when the CPUs are SMT/Hyperthreading 
> siblings, in that case they are sharing even the L1 cache, so 
> there's very little cost to round-robining the IRQs within the 
> CPU mask.
> 
> But AFAICS irqbalanced will spread irqs on wider masks than SMT 
> sibling boundaries, exposing us to the above performance 
> problem.

I would speculate it is irqbalanced that should be (in the case of x86)
cluster-agnostic and simply ask for a mask, while the apic layer just
executes, or at least reports, what was actually set. But that is a
different topic.
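
As for the SMT exception: whatever sits above the apic layer could
clamp the requested mask to SMT siblings before it is ever set, so the
hardware only round-robins among threads that share caches. A rough
sketch (untested, helper name made up; topology_thread_cpumask() is
the existing sibling accessor):

static void clamp_mask_to_smt_siblings(struct cpumask *dst,
				       const struct cpumask *requested)
{
	int cpu = cpumask_first(requested);

	/*
	 * Intersect the requested mask with the SMT siblings of its
	 * first CPU; round-robin within the result stays on threads
	 * that share L1, avoiding the cache-cold problem above.
	 */
	cpumask_and(dst, requested, topology_thread_cpumask(cpu));
}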

Looking at the bigger picture, it seems strange to me that the apic is
the layer that decides whether or not to make a CPU a target. That is
especially true when one has a particular cpumask in mind and wants it
set, but still is unable to do that due to the current limitation.

> So I think we need to tread carefully here.

Kernel parameter? IRQ line flag? Totally opposed? :)
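
For the "kernel parameter" option I mean something as simple as an
opt-in knob (name hypothetical, untested), checked when the
destination is composed:

static bool x2apic_lowprio;

static int __init set_x2apic_lowprio(char *str)
{
	/* Opt in to lowest priority delivery; the default keeps the
	 * current single-CPU behaviour. */
	x2apic_lowprio = true;
	return 0;
}
early_param("x2apic_lowprio", set_x2apic_lowprio);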

> Thanks,
> 
> 	Ingo

-- 
Regards,
Alexander Gordeev
agordeev@...hat.com