Message-ID: <874kfxw9zv.ffs@nanos.tec.linutronix.de>
Date:   Fri, 23 Apr 2021 12:50:12 +0200
From:   Thomas Gleixner <tglx@...utronix.de>
To:     John Garry <john.garry@...wei.com>
Cc:     "linux-kernel\@vger.kernel.org" <linux-kernel@...r.kernel.org>,
        Marc Zyngier <maz@...nel.org>,
        Peter Zijlstra <peterz@...radead.org>,
        Juri Lelli <juri.lelli@...hat.com>,
        Vincent Guittot <vincent.guittot@...aro.org>,
        Dietmar Eggemann <dietmar.eggemann@....com>,
        Steven Rostedt <rostedt@...dmis.org>,
        Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
        Daniel Bristot de Oliveira <bristot@...hat.com>,
        Ingo Molnar <mingo@...nel.org>
Subject: Re: Question on threaded handlers for managed interrupts

John,

On Thu, Apr 22 2021 at 17:10, John Garry wrote:
> I am finding that I can pretty easily trigger a system hang for certain 
> scenarios with my storage controller.
>
> So I'm getting something like this when running moderately heavy data 
> throughput:
>
> Starting 6 processes
> [   70.656622] sched: RT throttling activated
> [  207.632161] rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:
> [  207.638261] rcu:  0-...!: (1 GPs behind) idle=312/1/0x4000000000000000 softirq=508/512 fqs=0
> [  207.646777] rcu:  1-...!: (1 GPs behind) idle=694/0/0x0
>
> It ends pretty badly - see [0].

Obviously.

> The multi-queue storage controller (see [1] for memory refresh, but
> note that I can also trigger on PCI device host controller as well) is
> using managed interrupts and threaded handlers. Since the threaded
> handler uses SCHED_FIFO, aren't we always vulnerable to this situation
> with the managed interrupt and threaded handler combo? Would the
> advice be to just use irq polling here?

This is a really good question. Most interrupt handlers neither run
exceedingly long nor fire at high frequency, but the problem certainly
exists.

The network people have solved it with NAPI, which disables the interrupt
in the device and polls it from softirq context (which might then be
delegated to ksoftirqd) until it's drained.

I'm not familiar enough with the block/multiqueue layer to tell whether
such a concept exists there as well.

OTOH, the way you split the handling into hard/thread context already
provides the base for this.

The missing piece is infrastructure at the irq/scheduler core level to
handle this transparently.

I have some horrible ideas how to solve that, but I'm sure the scheduler
wizards can come up with a reasonable and generic solution.

Thanks,

        tglx
