Date:   Fri, 1 Mar 2019 08:51:38 -0800
From:   Linus Torvalds <>
To:     Frederic Weisbecker <>
Cc:     LKML <>,
        Sebastian Andrzej Siewior <>,
        Peter Zijlstra <>,
        "David S . Miller" <>,
        Mauro Carvalho Chehab <>,
        Thomas Gleixner <>,
        "Paul E . McKenney" <>,
        Frederic Weisbecker <>,
        Pavan Kondeti <>,
        Ingo Molnar <>,
        Joel Fernandes <>
Subject: Re: [PATCH 00/37] softirq: Per vector masking v3

On Thu, Feb 28, 2019 at 7:45 PM Frederic Weisbecker <> wrote:
> Numbers are indeed missing. In fact this patchset mostly just brings an
> infrastructure. We have yet to pinpoint the most latency-inducing
> softirq disabled sites and make them disable only the vectors that
> are involved in a given lock.

Note that I think we pretty much know that already: the people who
have had issues have never really been hurt by the "disable softirq"
paths themselves; their problems have all been about actually
*running* the softirqs (and that in turn being a latency issue for
running _other_ softirqs, and in particular for delaying them into
ksoftirqd).

Now, it may well be that yes, we'll have "block softirqs" code that
has issues too, but so far any such cost has been completely swamped
by the latencies of actually running them.

Note that this is all really fairly independent of the whole masking
logic. Yes, the masking logic comes into play too (allowing you to run
a subset of softirqs at a time), but on the whole the complaints I've
seen have not been "the networking softirq takes so long that it
delays USB tasklet handling"; they have been along the lines of
"the networking softirq gets invoked so often that it then floods the
system and triggers ksoftirqd, and _that_ then makes tasklet handling
latency go up insanely".

See the difference? Not the latency of having softirqs disabled, but
the latency of one group of softirqs causing problems for another when
they all get batched together (and soft-scheduled to another context).
