Date:   Mon, 16 Dec 2019 18:00:11 +0000
From:   Marc Zyngier <maz@...nel.org>
To:     John Garry <john.garry@...wei.com>
Cc:     Ming Lei <ming.lei@...hat.com>, <tglx@...utronix.de>,
        "chenxiang (M)" <chenxiang66@...ilicon.com>,
        <bigeasy@...utronix.de>, <linux-kernel@...r.kernel.org>,
        <hare@...e.com>, <hch@....de>, <axboe@...nel.dk>,
        <bvanassche@....org>, <peterz@...radead.org>, <mingo@...hat.com>
Subject: Re: [PATCH RFC 1/1] genirq: Make threaded handler use irq affinity for managed interrupt

Hi John,

On 2019-12-16 14:17, John Garry wrote:
> Hi Marc,
>
>>>
>>> I'm just wondering if non-managed interrupts should be included in
>>> the load balancing calculation? Couldn't irqbalance (if active)
>>> start moving non-managed interrupts around anyway?
>> But they are, aren't they? See what we do in irq_set_affinity:
>> +        atomic_inc(per_cpu_ptr(&cpu_lpi_count, cpu));
>> +        atomic_dec(per_cpu_ptr(&cpu_lpi_count,
>> +                       its_dev->event_map.col_map[id]));
>> We don't try to "rebalance" anything based on that though, not that
>> I think we should.
>
> Ah sorry, I meant whether they should not be included. In
> its_irq_domain_activate(), we increment the per-cpu lpi count and
> also use its_pick_target_cpu() to find the least loaded cpu. I am asking
> whether we should just stick with the old policy for non-managed
> interrupts here.
>
> After checking D05, I see a very significant performance hit for SAS
> controller performance - ~40% throughput reduction.

-ETOOMANYMOVINGPARTS.
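
To be clear about what we're arguing over: the per-CPU counters only
feed a least-loaded pick at activation time, roughly like this (a
sketch, not the actual its_pick_target_cpu() from the RFC; the
pick_least_loaded_cpu name is made up):

	static int pick_least_loaded_cpu(const struct cpumask *mask)
	{
		unsigned int min_count = UINT_MAX;
		int cpu, target = cpumask_first(mask);

		/* Track the CPU with the fewest LPIs targeted at it */
		for_each_cpu(cpu, mask) {
			unsigned int count =
				atomic_read(per_cpu_ptr(&cpu_lpi_count, cpu));

			if (count < min_count) {
				min_count = count;
				target = cpu;
			}
		}

		return target;
	}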

> With this patch, now we have effective affinity targeted at seemingly
> "random" CPUs, as opposed to all just using CPU0. This affects
> performance.

And piling all interrupts on the same CPU does help?

> The difference is that when we use managed interrupts - like for NVMe
> or the D06 SAS controller - the irq cpu affinity mask matches the CPUs
> which enqueue the requests to the queue associated with the interrupt.
> So there is an efficiency in enqueuing and dequeuing on the same CPU
> group - all related to blk multi-queue. And this is not the case for
> non-managed interrupts.

So you enqueue requests from CPU0 only? It seems a bit odd...
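
For comparison, the managed case you describe is what a PCI driver
gets almost for free (a sketch with a made-up helper name, and the
D05 SAS controller isn't a PCI MSI consumer, so this only illustrates
the mechanism):

	static int example_alloc_queue_irqs(struct pci_dev *pdev,
					    int nr_queues)
	{
		/*
		 * PCI_IRQ_AFFINITY makes the core spread the vectors
		 * across the online CPUs and mark them managed, so each
		 * queue's interrupt lands on the CPUs submitting to it.
		 */
		return pci_alloc_irq_vectors(pdev, 1, nr_queues,
					     PCI_IRQ_MSIX | PCI_IRQ_AFFINITY);
	}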

>>>> Please give this new patch a shot on your system (my D05 doesn't
>>>> have any managed devices):
>>>
>>> We could consider supporting platform msi managed interrupts, but I
>>> doubt the value.
>> It shouldn't be hard to do, and most of the existing code could be
>> moved to the generic level. As for the value, I'm not convinced
>> either. For example D05 uses the MBIGEN as an intermediate interrupt
>> controller, so MSIs are from the PoV of MBIGEN, and not the SAS
>> device attached to it. Not the best design...
>
> JFYI, I did raise the following topic before, but that's as far as I
> got:
>
> https://marc.info/?l=linux-block&m=150722088314310&w=2

Yes. And that's probably not very hard, but the problem in your case is
that the D05 HW is not using MSIs... You'd have to provide an
abstraction for wired interrupts (please don't).

You'd be better off directly setting the affinity of the interrupts
from the driver, but I somehow can't believe that you're only
submitting requests from the same CPU, always. There must be something
I'm missing.
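
By "directly setting the affinity" I mean nothing more exotic than
the usual hint, spread however the driver likes (again just a sketch
with a made-up helper; assumes contiguous CPU numbering):

	static void example_spread_irqs(const int *irqs, int nvec)
	{
		int i;

		/* Round-robin each vector onto one online CPU */
		for (i = 0; i < nvec; i++)
			irq_set_affinity_hint(irqs[i],
					      cpumask_of(i % num_online_cpus()));
	}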

Thanks,

         M.
-- 
Jazz is not dead. It just smells funny...
