Message-ID: <7dc37b35d8ec6c78e75969d8c6c2d2e9@kernel.org>
Date:   Mon, 20 Jan 2020 18:45:34 +0000
From:   Marc Zyngier <maz@...nel.org>
To:     John Garry <john.garry@...wei.com>
Cc:     linux-kernel@...r.kernel.org, Thomas Gleixner <tglx@...utronix.de>,
        Jason Cooper <jason@...edaemon.net>,
        Ming Lei <ming.lei@...hat.com>,
        "chenxiang (M)" <chenxiang66@...ilicon.com>
Subject: Re: [PATCH] irqchip/gic-v3-its: Balance initial LPI affinity across
 CPUs

Hi John,

On 2020-01-20 18:21, John Garry wrote:
> On 20/01/2020 17:42, Marc Zyngier wrote:
> 
> Hi Marc,
> 
>>>>      static u64 its_irq_get_msi_base(struct its_device *its_dev)
>>>> @@ -2773,28 +2829,34 @@ static int its_irq_domain_activate(struct irq_domain *domain,
>>>>    {
>>>>    	struct its_device *its_dev = irq_data_get_irq_chip_data(d);
>>>>    	u32 event = its_get_event_id(d);
>>>> -	const struct cpumask *cpu_mask = cpu_online_mask;
>>>> -	int cpu;
>>>> +	int ret = 0, cpu = nr_cpu_ids;
>>>> +	const struct cpumask *reqmask;
>>>> +	cpumask_var_t mask;
>>>> 
>>>> -	/* get the cpu_mask of local node */
>>>> -	if (its_dev->its->numa_node >= 0)
>>>> -		cpu_mask = cpumask_of_node(its_dev->its->numa_node);
>>>> +	if (irqd_affinity_is_managed(d))
>>>> +		reqmask = irq_data_get_affinity_mask(d);
>>>> +	else
>>>> +		reqmask = cpu_online_mask;
>>>> 
>>>> -	/* Bind the LPI to the first possible CPU */
>>>> -	cpu = cpumask_first_and(cpu_mask, cpu_online_mask);
>>>> -	if (cpu >= nr_cpu_ids) {
>>>> -		if (its_dev->its->flags & ITS_FLAGS_WORKAROUND_CAVIUM_23144)
>>>> -			return -EINVAL;
>>>> +	if (!alloc_cpumask_var(&mask, GFP_KERNEL))
>>>> +		return -ENOMEM;
>>>> 
>>>> -		cpu = cpumask_first(cpu_online_mask);
>>>> +	its_compute_affinity(d, reqmask, mask);
>>>> +	cpu = its_pick_target_cpu(mask);
>>>> +	if (cpu >= nr_cpu_ids) {
>>>> +		ret = -EINVAL;
>>>> +		goto out;
>>>>    	}
>>>> 
>>>> +	atomic_inc(per_cpu_ptr(&cpu_lpi_count, cpu));
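
For the curious, the two helpers above boil down to something fairly
simple. This is only a rough sketch inferred from the call sites, not
the exact patch code: its_compute_affinity() presumably narrows the
requested mask down to what is actually usable, and
its_pick_target_cpu() then scans that mask for the least loaded CPU:

static DEFINE_PER_CPU(atomic_t, cpu_lpi_count);

/*
 * Rough sketch inferred from the call sites above, not the actual
 * patch code: pick the online CPU in @mask with the fewest LPIs
 * currently accounted to it; returns nr_cpu_ids if none qualifies.
 */
static int its_pick_target_cpu(const struct cpumask *mask)
{
	unsigned int cpu, best = nr_cpu_ids, min_count = UINT_MAX;

	for_each_cpu_and(cpu, mask, cpu_online_mask) {
		unsigned int count = atomic_read(per_cpu_ptr(&cpu_lpi_count, cpu));

		if (count < min_count) {
			min_count = count;
			best = cpu;
		}
	}

	return best;
}
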
>>> 
>>> I wonder if we should only consider managed interrupts in this
>>> accounting?
>>> 
>>> As it stands, cpu0 is effectively going to be excluded from the
>>> balancing, as it will have so many LPIs targeted at it.
>> 
>> Maybe, but only if the provided managed affinity gives you the
>> opportunity of placing the LPI somewhere else.
> 
> Of course, if there's no other CPU in the mask, then so be it.
> 
>> If the managed
>> affinity says CPU0 only, then that's where you end up.
>> 
> 
> If my debug code is correct (with the above fix), cpu0 initially had
> 763 interrupts targeted at it on my D06 :)

You obviously have too many devices in this machine... ;-)

> But it's not just cpu0. I find that initial non-managed interrupt
> affinity masks are generally set to CPU cluster/NUMA node masks, so the
> first CPUs in those masks are a bit over-subscribed, and we may then be
> spreading the managed interrupts over fewer CPUs in the mask.
> 
> This is a taste of the LPI distribution on my 96-core system:
> cpu0 763
> cpu1 2
> cpu3 1
> cpu4 2
> cpu5 2
> cpu6 0
> cpu7 0
> cpu8 2
> cpu9 1
> cpu10 0
> ...
> cpu16 2
> ...
> cpu24 8
> ...
> cpu48 10 (NUMA node boundary)
> ...

We're stuck between a rock and a hard place here:

(1) We place all interrupts on the least loaded CPU that matches
     the affinity -> results in performance issues on some funky
     HW (like D05's SAS controller).

(2) We place managed interrupts on the least loaded CPU that matches
     the affinity -> we have artificial load on NUMA boundaries, and
     reduced spread of overlapping managed interrupts.

(3) We don't account for non-managed LPIs, and we run the risk of
     unpredictable performance because we don't really know where
     the *other* interrupts are.
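
To make the contrast concrete: the knob is really which activations
take part in the accounting. (3) would amount to gating the counter
update, something like this (sketch only, not actual patch code):

	/*
	 * Sketch of (3), not actual patch code: non-managed LPIs no
	 * longer take part in the per-CPU accounting.
	 */
	if (irqd_affinity_is_managed(d))
		atomic_inc(per_cpu_ptr(&cpu_lpi_count, cpu));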

My personal preference would be to go for (1), as in my original post.
I find (3) the least appealing, because we stop tracking things
altogether. (2) feels like "the least of all evils": it still gives a
decent performance gain, seems to give predictable performance, and
doesn't regress lesser systems...

I'm definitely open to suggestions here.

Thanks,

         M.
-- 
Jazz is not dead. It just smells funny...
