Message-ID: <20220406025241.191300-1-liaochang1@huawei.com>
Date: Wed, 6 Apr 2022 10:52:38 +0800
From: Liao Chang <liaochang1@...wei.com>
To: <mcgrof@...nel.org>, <keescook@...omium.org>, <yzaikin@...gle.com>,
<liaochang1@...wei.com>, <tglx@...utronix.de>, <clg@...d.org>,
<nitesh@...hat.com>, <edumazet@...gle.com>, <peterz@...radead.org>,
<joshdon@...gle.com>, <masahiroy@...nel.org>, <nathan@...nel.org>,
<akpm@...ux-foundation.org>, <vbabka@...e.cz>,
<gustavoars@...nel.org>, <arnd@...db.de>, <chris@...isdown.name>,
<dmitry.torokhov@...il.com>, <linux@...musvillemoes.dk>,
<daniel@...earbox.net>, <john.ogness@...utronix.de>,
<will@...nel.org>, <dave@...olabs.net>, <frederic@...nel.org>
CC: <linux-kernel@...r.kernel.org>, <linux-fsdevel@...r.kernel.org>,
<heying24@...wei.com>, <guohanjun@...wei.com>,
<weiyongjun1@...wei.com>
Subject: [RFC 0/3] softirq: Introduce softirq throttling
The kernel checks for pending softirqs periodically at a few points in the
kernel code, such as irq_exit() and __local_bh_enable_ip(). Softirqs raised
on a given CPU must be executed on that same CPU. This property makes
softirqs a potentially "dangerous" facility, because one CPU can end up
very busy processing softirqs while the others sit mostly idle.
This concern showed up in a real networking use case: recently, one of our
engineers found that the time needed for connection re-establishment on
kernel v5.10 is 300 times longer than on v4.19, while softirq processing
monopolized almost 99% of the CPU. The problem is that once the connection
between the sender and receiver nodes is lost, the NIC driver on the sender
node keeps raising NET_TX softirqs until the connection recovers. The
system log shows that most softirqs are executed from
__local_bh_enable_ip(); since __local_bh_enable_ip() is used widely in
kernel code, it is easy for softirqs to eat up most of the CPU time,
leaving the user-mode application too few CPU cycles to re-establish the
connection promptly.
Although the kernel limits the running time of a single __do_softirq()
invocation, it does not bound the total time spent in softirqs on a given
CPU. This patchset therefore introduces a safeguard mechanism that allows
the system administrator to allocate CPU bandwidth for use by softirqs.
The mechanism is called Softirq Throttling and is controlled by two
parameters in the /proc file system:
/proc/sys/kernel/softirq_period_ms
  Defines the period in ms (milliseconds) to be considered as 100% of CPU
  bandwidth; the default value is 1,000 ms (1 second). Changes to the
  period must be very well thought out, as values that are too long or too
  short can produce unexpected behavior.
/proc/sys/kernel/softirq_runtime_ms
  Defines the bandwidth available to softirqs on each CPU; the default
  value is 950 ms (0.95 second) or, in other words, 95% of the CPU
  bandwidth. Setting this parameter to a negative integer means that
  softirqs may use up to 100% of the CPU time.
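As a sketch of how an administrator might tune these knobs (assuming the
sysctl files are spelled softirq_period_ms and softirq_runtime_ms, matching
the runtime parameter above; the values chosen are purely illustrative):

```shell
# Shrink the accounting period to 500 ms and cap softirq execution at
# 400 ms per period, i.e. 80% of each CPU (illustrative values only).
echo 500 > /proc/sys/kernel/softirq_period_ms
echo 400 > /proc/sys/kernel/softirq_runtime_ms

# Effectively disable throttling: a negative runtime lets softirqs
# consume up to 100% of the CPU time, per the description above.
echo -1 > /proc/sys/kernel/softirq_runtime_ms
```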
The default values for the softirq throttling mechanism allow 95% of the
CPU time to be used by softirqs. The remaining 5% is reserved for other
kinds of work, such as syscalls, interrupts, exceptions, real-time
processes and normal processes, when the softirq workload on the system is
very heavy. The system administrator can tune the two parameters above to
balance system performance and stability.
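The arithmetic behind the defaults stated above can be checked directly
(950 ms of runtime out of a 1,000 ms period):

```shell
# Compute the softirq CPU bandwidth implied by the default settings.
runtime_ms=950    # default softirq_runtime_ms from the description above
period_ms=1000    # default softirq_period_ms from the description above
pct=$((runtime_ms * 100 / period_ms))
echo "default softirq bandwidth: ${pct}%"   # prints "default softirq bandwidth: 95%"
```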
Liao Chang (3):
softirq: Add two parameters to control CPU bandwidth for use by
softirq
softirq: Do throttling when softirqs use up its bandwidth
softirq: Introduce statistics about softirq throttling
fs/proc/softirqs.c | 18 +++++
include/linux/interrupt.h | 7 ++
include/linux/kernel_stat.h | 27 +++++++
init/Kconfig | 10 +++
kernel/softirq.c | 155 ++++++++++++++++++++++++++++++++++++
kernel/sysctl.c | 16 ++++
6 files changed, 233 insertions(+)
--
2.17.1