Date:   Fri, 8 May 2020 09:09:58 -0700
From:   "Raj, Ashok" <ashok.raj@...el.com>
To:     Thomas Gleixner <tglx@...utronix.de>
Cc:     "Raj, Ashok" <ashok.raj@...ux.intel.com>,
        Evan Green <evgreen@...omium.org>,
        Mathias Nyman <mathias.nyman@...ux.intel.com>, x86@...nel.org,
        linux-pci <linux-pci@...r.kernel.org>,
        LKML <linux-kernel@...r.kernel.org>,
        Bjorn Helgaas <bhelgaas@...gle.com>,
        "Ghorai, Sukumar" <sukumar.ghorai@...el.com>,
        "Amara, Madhusudanarao" <madhusudanarao.amara@...el.com>,
        "Nandamuri, Srikanth" <srikanth.nandamuri@...el.com>,
        Ashok Raj <ashok.raj@...el.com>
Subject: Re: MSI interrupt for xhci still lost on 5.6-rc6 after cpu hotplug

Hi Thomas,

On Fri, May 08, 2020 at 01:04:29PM +0200, Thomas Gleixner wrote:
> Ashok,
> 
> "Raj, Ashok" <ashok.raj@...el.com> writes:
> >> But as this last one is the migration thread, aka stomp machine, I
> >> assume this is a hotplug operation. Which means the CPU cannot handle
> >> interrupts anymore. In that case we check the old vector on the
> >> unplugged CPU in fixup_irqs() and do the retrigger from there.
> >> Can you please add tracing to that one as well?
> >
> > New tracelog attached. It just happened once.
> 
> Correct and it worked as expected.
> 
>      migration/3-24    [003] d..1   275.665751: msi_set_affinity: quirk[1] new vector allocated, new apic = 4 vector = 36 this apic = 6
>      migration/3-24    [003] d..1   275.665776: msi_set_affinity: Redirect to new vector 36 on old apic 6
>      migration/3-24    [003] d..1   275.665789: msi_set_affinity: Transition to new target apic 4 vector 36
>      migration/3-24    [003] d..1   275.665790: msi_set_affinity: Update Done [IRR 0]: irq 123 Nvec 36 Napic 4
>      migration/3-24    [003] d..1   275.666792: fixup_irqs: retrigger vector 33 irq 123
> 
> So looking at your trace further down, the problem is not the last
> one. It dies already before that:
> 
>            <...>-14    [001] d..1   284.901587: msi_set_affinity: quirk[1] new vector allocated, new apic = 6 vector = 33 this apic = 2
>            <...>-14    [001] d..1   284.901604: msi_set_affinity: Direct Update: irq 123 Ovec=33 Oapic 2 Nvec 33 Napic 6
>            
> Here, the interrupts stop coming in and that's just a regular direct
> update, i.e. same vector, different CPU. The update below is updating a
> dead device already.

So we lost the interrupt either before or after we migrated to apic 2?

In the direct update, when we migrate to the same vector on a different CPU,
if there was a pending bit in IRR on the current CPU, where does that get
cleaned up?

> 
>      migration/3-24    [003] d..1   284.924960: msi_set_affinity: quirk[1] new vector allocated, new apic = 4 vector = 36 this apic = 6
>      migration/3-24    [003] d..1   284.924987: msi_set_affinity: Redirect to new vector 36 on old apic 6
>      migration/3-24    [003] d..1   284.924999: msi_set_affinity: Transition to new target apic 4 vector 36
>      migration/3-24    [003] d..1   284.925000: msi_set_affinity: Update Done [IRR 0]: irq 123 Nvec 36 Napic 4
> 
> TBH, I can't see anything what's wrong here from the kernel side and as
> this is new silicon and you're the only ones reporting this it seems
> that this is something which is specific to that particular
> hardware. Have you talked to the hardware people about this?
> 

After chasing this, I'm also leaning that way. We had a question about
the interrupt moderation timer and how it affects this behavior.
Mathias tried turning the moderation timer off, and we can still
reproduce the hang. We are reaching out to the HW folks to get a handle
on this.

With legacy MSI we can have these races, and the kernel does the song
and dance to cope with them, but we see this happening even when IR
(interrupt remapping) is turned on, which is perplexing. I think with
IR, once we change the vector and flush the interrupt entry cache, any
interrupt that was in flight should land in IRR, and should presumably
get cleaned up by send_cleanup_vector().

Cheers,
Ashok
