Message-ID: <Yk2ppI60P98E2Qj5@casper.infradead.org>
Date: Wed, 6 Apr 2022 15:54:28 +0100
From: Matthew Wilcox <willy@...radead.org>
To: Liao Chang <liaochang1@...wei.com>
Cc: mcgrof@...nel.org, keescook@...omium.org, yzaikin@...gle.com,
tglx@...utronix.de, clg@...d.org, nitesh@...hat.com,
edumazet@...gle.com, peterz@...radead.org, joshdon@...gle.com,
masahiroy@...nel.org, nathan@...nel.org, akpm@...ux-foundation.org,
vbabka@...e.cz, gustavoars@...nel.org, arnd@...db.de,
chris@...isdown.name, dmitry.torokhov@...il.com,
linux@...musvillemoes.dk, daniel@...earbox.net,
john.ogness@...utronix.de, will@...nel.org, dave@...olabs.net,
frederic@...nel.org, linux-kernel@...r.kernel.org,
linux-fsdevel@...r.kernel.org, heying24@...wei.com,
guohanjun@...wei.com, weiyongjun1@...wei.com
Subject: Re: [RFC 0/3] softirq: Introduce softirq throttling
On Wed, Apr 06, 2022 at 10:52:38AM +0800, Liao Chang wrote:
> The kernel checks for pending softirqs periodically; these checks are
> performed at a few points in kernel code, such as irq_exit() and
> __local_bh_enable_ip(). Softirqs raised by a given CPU must be executed
> on that same CPU. This characteristic makes softirq handling a
> potentially "dangerous" operation, because one CPU can end up very busy
> while the others are mostly idle.
>
> The above concern was borne out in a networking use case: recently, our
> engineers found that the time needed for connection re-establishment on
> kernel v5.10 is 300 times larger than on v4.19, while softirq
> processing monopolizes almost 99% of the CPU. The problem stems from
> the connection between the Sender and Receiver nodes being lost: the
> NIC driver on the Sender node keeps raising NET_TX softirqs until the
> connection recovers. The system log shows that most softirqs are
> executed from __local_bh_enable_ip(); since __local_bh_enable_ip() is
> used widely in kernel code, it can easily consume most of the CPU, and
> the user-mode application cannot obtain enough CPU cycles to
> re-establish the connection promptly.
Shouldn't you fix that bug instead? This seems like papering over the
bad effects of a bug and would make it harder to find bugs like this in
the future. Essentially, it's the same as a screaming hardware interrupt,
except that it's a software interrupt, so we can fix the bug instead of
working around broken hardware.