Message-ID: <86eex7i35s.wl-maz@kernel.org>
Date:   Sat, 14 Dec 2019 10:59:11 +0000
From:   Marc Zyngier <maz@...nel.org>
To:     John Garry <john.garry@...wei.com>
Cc:     Ming Lei <ming.lei@...hat.com>, <tglx@...utronix.de>,
        "chenxiang (M)" <chenxiang66@...ilicon.com>,
        <bigeasy@...utronix.de>, <linux-kernel@...r.kernel.org>,
        <hare@...e.com>, <hch@....de>, <axboe@...nel.dk>,
        <bvanassche@....org>, <peterz@...radead.org>, <mingo@...hat.com>
Subject: Re: [PATCH RFC 1/1] genirq: Make threaded handler use irq affinity for managed interrupt

On Fri, 13 Dec 2019 12:08:54 +0000,
John Garry <john.garry@...wei.com> wrote:
> 
> Hi Marc,
> 
> >> JFYI, we're still testing this and the patch itself seems to work as
> >> intended.
> >> 
> >> Here's the kernel log if you just want to see how the interrupts are
> >> getting assigned:
> >> https://pastebin.com/hh3r810g
> > 
> > It is a bit hard to make sense of this dump, especially on such a wide
> > machine (I want one!)
> 
> So do I :) That's the newer "D06CS" board.
> 
> > without really knowing the topology of the system.
> 
> So it's a 2-socket system; each socket has 2 CPU dies, and each die has
> 6 clusters of 4 CPUs, which gives 96 CPUs in total.
> 
> > 
> >> For me, I did get a performance boost for NVMe testing, but my
> >> colleague Xiang Chen saw a drop for our storage test of interest  -
> >> that's the HiSi SAS controller. We're trying to make sense of it now.
> > 
> > One of the differences is that with this patch, the initial affinity
> > is picked inside the NUMA node that matches the ITS.
> 
> Is that the case even for managed interrupts? We're testing the storage
> controller, which uses managed interrupts. I should have made that
> clearer.

The ITS driver doesn't care whether an interrupt's affinity is
'managed' or not. And I don't think a low-level driver should, as it
will just follow whatever interrupt affinity it is requested to use. If
a managed interrupt has some requirements, then those requirements had
better be explicit in terms of CPU affinity.
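
(For context on where "managed" comes from in the first place: a driver
typically opts in at vector-allocation time by passing an irq_affinity
descriptor, and the core then spreads and manages those vectors itself.
A minimal sketch, assuming the usual PCI MSI-X path; the function and
parameter names below are illustrative, not taken from the hisi_sas
driver:)

#include <linux/pci.h>
#include <linux/interrupt.h>

/*
 * Illustrative sketch only: allocate MSI-X vectors whose affinity is
 * "managed" by the core. The driver never calls irq_set_affinity() on
 * the managed vectors afterwards.
 */
static int example_alloc_managed_vectors(struct pci_dev *pdev,
					 unsigned int max_vecs)
{
	struct irq_affinity affd = {
		.pre_vectors = 1,	/* e.g. one non-managed admin/event vector */
	};

	return pci_alloc_irq_vectors_affinity(pdev, 1, max_vecs,
					      PCI_IRQ_MSIX | PCI_IRQ_AFFINITY,
					      &affd);
}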

> > In your case, that's either node 0 or 2. But it is unclear which CPUs
> > these map to.
> > 
> > Given that I see interrupts mapped to CPUs 0-23 on one side, and 48-71
> > on the other, it looks like half of your machine gets starved, 
> 
> Seems that way.
> 
> So this is a mystery to me:
> 
> [   23.584192] picked CPU62 IRQ147
> 
> 147:  0  0  0  ...  0   ITS-MSI 94404626 Edge      hisi_sas_v3_hw cq
>       (per-CPU count columns elided; every column reads 0)
> 
> 
> and
> 
> [   25.896728] picked CPU62 IRQ183
> 
> 183:  0  0  0  ...  0   ITS-MSI 94437398 Edge      hisi_sas_v3_hw cq
>       (per-CPU count columns elided; every column reads 0)
> 
> 
> But mpstat reports for CPU62:
> 
> 12:44:58 AM  CPU   %usr  %nice   %sys  %iowait  %irq  %soft  %steal  %guest  %gnice  %idle
> 12:45:00 AM   62   6.54   0.00  42.99     0.00   6.54  12.15    0.00    0.00    6.54  25.23
> 
> I don't know what interrupts they are...

Clearly, they aren't your SAS interrupts. But the debug print does not
mean that these are the only interrupts targeting CPU62. Looking at the
62nd column of /proc/interrupts should tell you what fires (and my bet
is on something like the timer).
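
If it helps, a throwaway helper along these lines (a sketch, not an
existing tool) prints only the /proc/interrupts rows whose count in a
given CPU column is non-zero:

/*
 * Print every /proc/interrupts row with a non-zero count in the given
 * CPU column, to see what actually fires on that CPU.
 *
 * Build: gcc -O2 -o irqcol irqcol.c
 * Use:   ./irqcol 62
 */
#include <ctype.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(int argc, char **argv)
{
	char line[16384];
	int cpu, ncpu = 0;
	FILE *f;

	if (argc != 2) {
		fprintf(stderr, "usage: %s <cpu>\n", argv[0]);
		return 1;
	}
	cpu = atoi(argv[1]);

	f = fopen("/proc/interrupts", "r");
	if (!f) {
		perror("/proc/interrupts");
		return 1;
	}

	/* Header row: count the CPUn labels to know how many count columns follow. */
	if (fgets(line, sizeof(line), f))
		for (char *p = strtok(line, " \t\n"); p; p = strtok(NULL, " \t\n"))
			ncpu++;

	if (cpu >= ncpu) {
		fprintf(stderr, "only %d CPU columns\n", ncpu);
		fclose(f);
		return 1;
	}

	while (fgets(line, sizeof(line), f)) {
		char copy[16384];
		char *tok, *save;
		long long count = -1;
		int col = 0;

		strcpy(copy, line);
		tok = strtok_r(copy, " \t\n", &save);	/* "147:" etc. */
		if (!tok)
			continue;
		while ((tok = strtok_r(NULL, " \t\n", &save)) && col <= cpu) {
			if (!isdigit((unsigned char)tok[0]))
				break;	/* short rows such as "ERR:" */
			if (col == cpu)
				count = atoll(tok);
			col++;
		}
		if (count > 0)
			fputs(line, stdout);
	}
	fclose(f);
	return 0;
}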

> It's the "hisi_sas_v3_hw cq" interrupts which we're interested in.

Clearly, they aren't firing.

> > and that may be because no ITS targets the NUMA nodes they are part
> > of.
> 
> So both storage controllers (which we're interested in for this test)
> are on socket #0, node #0.
> 
> > It would be interesting to see what happens if you manually set the
> > affinity of the interrupts outside of the NUMA node.
> > 
> 
> Again, managed, so I don't think it's possible.

OK, we need to get back to what the actual requirements of a 'managed'
interrupt are, because there is clearly something that hasn't made it
into the core code...
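
(For readers landing here from the archive: the RFC in the subject line
is roughly of the shape below. This is a paraphrased sketch under that
assumption, not the actual diff, and the helper name is made up; the
idea is that the irq thread's CPU affinity follows the interrupt's
effective affinity when the interrupt is managed.)

#include <linux/irq.h>
#include <linux/irqdesc.h>
#include <linux/sched.h>

/* Sketch only, not the real patch: a kernel/irq/manage.c-style helper. */
static void irq_thread_adjust_affinity(struct irq_desc *desc)
{
	const struct cpumask *mask;

	if (irqd_affinity_is_managed(&desc->irq_data))
		mask = irq_data_get_effective_affinity_mask(&desc->irq_data);
	else
		mask = desc->irq_common_data.affinity;

	set_cpus_allowed_ptr(current, mask);
}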

	M.

-- 
Jazz is not dead, it just smells funny.
