Message-ID: <alpine.DEB.2.20.1709161212160.2105@nanos>
Date:   Sat, 16 Sep 2017 12:15:56 +0200 (CEST)
From:   Thomas Gleixner <tglx@...utronix.de>
To:     YASUAKI ISHIMATSU <yasu.isimatu@...il.com>
cc:     Kashyap Desai <kashyap.desai@...adcom.com>,
        Hannes Reinecke <hare@...e.de>,
        Marc Zyngier <marc.zyngier@....com>,
        Christoph Hellwig <hch@....de>, axboe@...nel.dk,
        mpe@...erman.id.au, keith.busch@...el.com, peterz@...radead.org,
        LKML <linux-kernel@...r.kernel.org>, linux-scsi@...r.kernel.org,
        Sumit Saxena <sumit.saxena@...adcom.com>,
        Shivasharan Srikanteshwara 
        <shivasharan.srikanteshwara@...adcom.com>
Subject: Re: system hung up when offlining CPUs

On Thu, 14 Sep 2017, YASUAKI ISHIMATSU wrote:
> On 09/13/2017 09:33 AM, Thomas Gleixner wrote:
> >> Question - "what happens once __cpu_disable() is called and some of the
> >> queued interrupts have affinity to that particular CPU?"
> >> I assume those pending/queued interrupts should ideally be migrated to
> >> the remaining online CPUs. They should not go unhandled if we want to
> >> avoid such IO timeouts.
> > 
> > Can you please provide the following information, before and after
> > offlining the last CPU in the affinity set:
> > 
> > # cat /proc/irq/$IRQNUM/smp_affinity_list
> > # cat /proc/irq/$IRQNUM/effective_affinity
> > # cat /sys/kernel/debug/irq/irqs/$IRQNUM
> > 
> > The last one requires: CONFIG_GENERIC_IRQ_DEBUGFS=y
> 
> Here is the info for one of the megasas irqs:
> 
> - Before offlining the CPUs
> /proc/irq/70/smp_affinity_list
> 24-29
> 
> /proc/irq/70/effective_affinity
> 00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,3f000000
> 
> /sys/kernel/debug/irq/irqs/70
> handler:  handle_edge_irq
> status:   0x00004000
> istate:   0x00000000
> ddepth:   0
> wdepth:   0
> dstate:   0x00609200
>             IRQD_ACTIVATED
>             IRQD_IRQ_STARTED
>             IRQD_MOVE_PCNTXT
>             IRQD_AFFINITY_SET
>             IRQD_AFFINITY_MANAGED
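
FWIW, that before/after snapshot can be scripted in one go. The IRQ and CPU
numbers below are just the ones from your example; writing 0 to the online
file takes the CPU down:

  # grep . /proc/irq/70/smp_affinity_list /proc/irq/70/effective_affinity
  # cat /sys/kernel/debug/irq/irqs/70
  # for cpu in 24 25 26 27 28 29; do echo 0 > /sys/devices/system/cpu/cpu$cpu/online; done
  # grep . /proc/irq/70/smp_affinity_list /proc/irq/70/effective_affinity
  # cat /sys/kernel/debug/irq/irqs/70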

So this uses managed affinity, which means that once the last CPU in the
affinity mask goes offline, the interrupt is shut down by the irq core
code. That is exactly what happened here:

> dstate:   0x00a39000
>             IRQD_IRQ_DISABLED
>             IRQD_IRQ_MASKED
>             IRQD_MOVE_PCNTXT
>             IRQD_AFFINITY_SET
>             IRQD_AFFINITY_MANAGED
>             IRQD_MANAGED_SHUTDOWN  <---------------
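
The other direction can be checked the same way: when one of the CPUs in
the mask is brought back online, the core code should start the interrupt
again and IRQD_MANAGED_SHUTDOWN should disappear from the debugfs output,
e.g.:

  # echo 1 > /sys/devices/system/cpu/cpu24/online
  # grep -e dstate -e IRQD_ /sys/kernel/debug/irq/irqs/70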

So the irq core code works as expected, but something in the
driver/scsi/block stack seems to fiddle with that shut-down queue.

I can only tell you about the inner workings of the irq code, but I have
no clue about the rest.
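
If it helps to narrow it down on the block layer side, two quick things to
look at, assuming the host runs with blk-mq/scsi-mq (the device name below
is just a placeholder):

  # grep '^ *70:' /proc/interrupts        <- does the count still move after the offline?
  # cat /sys/block/sdX/mq/*/cpu_list      <- which hw queue do CPUs 24-29 map to?

If the count stays flat while IO still times out, the requests are
presumably stuck in the queue that belongs to the shut down vector.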

Thanks,

	tglx

