Message-ID: <20241014090339.2478-1-kprateek.nayak@amd.com>
Date: Mon, 14 Oct 2024 09:03:34 +0000
From: K Prateek Nayak <kprateek.nayak@....com>
To: Ingo Molnar <mingo@...hat.com>, Peter Zijlstra <peterz@...radead.org>,
Juri Lelli <juri.lelli@...hat.com>, Vincent Guittot
<vincent.guittot@...aro.org>, Thomas Gleixner <tglx@...utronix.de>,
<linux-kernel@...r.kernel.org>, Sebastian Andrzej Siewior
<bigeasy@...utronix.de>
CC: Dietmar Eggemann <dietmar.eggemann@....com>, Steven Rostedt
<rostedt@...dmis.org>, Ben Segall <bsegall@...gle.com>, Mel Gorman
<mgorman@...e.de>, Valentin Schneider <vschneid@...hat.com>, "Paul E.
McKenney" <paulmck@...nel.org>, Rik van Riel <riel@...riel.com>, "Neeraj
Upadhyay" <neeraj.upadhyay@...nel.org>, Leonardo Bras <leobras@...hat.com>,
Thorsten Blum <thorsten.blum@...lux.com>, Zqiang <qiang.zhang1211@...il.com>,
Tejun Heo <tj@...nel.org>, NeilBrown <neilb@...e.de>, Caleb Sander Mateos
<csander@...estorage.com>, "Gautham R . Shenoy" <gautham.shenoy@....com>,
Chen Yu <yu.c.chen@...el.com>, Julia Lawall <Julia.Lawall@...ia.fr>, "K
Prateek Nayak" <kprateek.nayak@....com>
Subject: [PATCH v3 0/5] Idle Load Balance fixes and softirq enhancements
Hello everyone,
This is the third version, with minor changes from the last and some more
benchmarking data below. Any and all feedback is highly appreciated.
This series is based on:
git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git sched/core
at commit 7266f0a6d3bb ("fs/bcachefs: Fix __wait_on_freeing_inode()
definition of waitqueue entry")
Since commit b2a02fc43a1f ("smp: Optimize
send_call_function_single_ipi()"), an idle CPU in TIF_POLLING_NRFLAG mode
can be pulled out of idle by setting TIF_NEED_RESCHED instead of sending
an actual IPI. This affects at least the three scenarios described below;
a simplified sketch of the underlying mechanism follows the list:
1. A need_resched() check within a call function does not necessarily
indicate a task wakeup since a CPU intending to send an IPI to an
idle target in TIF_POLLING_NRFLAG mode can simply queue the
SMP-call-function and set the TIF_NEED_RESCHED flag to pull the
polling target out of idle. The SMP-call-function will be executed by
flush_smp_call_function_queue() on the idle-exit path. On x86, where
mwait_idle_with_hints() sets TIF_POLLING_NRFLAG for long idle periods,
this leads to the idle load balancer bailing out early, since the
need_resched() check in nohz_csd_func() returns true in most
instances.
2. A TIF_POLLING_NRFLAG idling CPU woken up to process an IPI will end
up calling schedule() even in cases where the call function does not
wake up a new task on the idle CPU, thus delaying the idle re-entry.
3. Julia Lawall reported a case where a softirq raised from an
SMP-call-function on an idle CPU will wake up ksoftirqd, since
flush_smp_call_function_queue() executes in the idle thread's context.
This can throw off the idle load balancer by making the idle CPU appear
busy, since ksoftirqd just woke up on said CPU [1].
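For reference, here is a simplified sketch of the IPI-elision mechanism
behind all three scenarios, condensed from set_nr_if_polling() in
kernel/sched/core.c (illustrative, not the exact upstream code):

/*
 * If the target CPU's idle task is polling on TIF_NEED_RESCHED, setting
 * the flag is enough to pull it out of idle and the IPI can be elided.
 */
static bool set_nr_if_polling(struct task_struct *idle)
{
        struct thread_info *ti = task_thread_info(idle);
        typeof(ti->flags) val = READ_ONCE(ti->flags);

        for (;;) {
                if (!(val & _TIF_POLLING_NRFLAG))
                        return false;   /* not polling: send a real IPI */
                if (val & _TIF_NEED_RESCHED)
                        return true;    /* already set: target is waking */
                if (try_cmpxchg(&ti->flags, &val, val | _TIF_NEED_RESCHED))
                        return true;    /* flag set: IPI elided */
        }
}

The sender queues the SMP-call-function and only sends an actual IPI if
set_nr_if_polling() fails; otherwise flush_smp_call_function_queue()
runs the callback on the idle-exit path.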
The solution to (2.) was sent independently in [2] since it does not
depend on the changes enclosed in this series, which reworks some
PREEMPT_RT-specific bits.
(1.) was solved by dropping the need_resched() check in nohz_csd_func()
(please refer to Patch 2/5 for the full explanation), which led to a
splat on PREEMPT_RT kernels [3].
Since flush_smp_call_function_queue() and the following
do_softirq_post_smp_call_flush() run with interrupts disabled, it is not
ideal for the IRQ handlers to raise a softirq there, prolonging the
IRQs-disabled section, especially on PREEMPT_RT kernels. For the time
being, the WARN_ON_ONCE() in do_softirq_post_smp_call_flush() has been
adjusted to allow raising a SCHED_SOFTIRQ from
flush_smp_call_function_queue(); however, its merit can be debated on
this series.
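A rough sketch of the kind of adjustment meant here (see Patch 1 for the
actual change; the masking below is only illustrative):

void do_softirq_post_smp_call_flush(unsigned int was_pending)
{
        unsigned int is_pending = local_softirq_pending();

        if (unlikely(was_pending != is_pending)) {
                /*
                 * Tolerate SCHED_SOFTIRQ, which nohz_csd_func() may now
                 * raise from flush_smp_call_function_queue(); keep
                 * warning about anything else raised with IRQs disabled.
                 */
                WARN_ON_ONCE((was_pending ^ is_pending) &
                             ~(1U << SCHED_SOFTIRQ));
                invoke_softirq();
        }
}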
With the above solution, the problem discussed in (3.) becomes even more
prominent, with idle load balancing unnecessarily waking up ksoftirqd
(please refer to Patch 5/5 for a detailed explanation). v1 attempted to
solve this by introducing a per-cpu variable to keep track of an
impending call to do_softirq(). Peter suggested reusing the
softirq_ctrl::cnt that PREEMPT_RT uses to prevent wakeups of ksoftirqd,
and unifying should_wakeup_ksoftirqd() [4]. Patches 3 and 4 prepare for
this unification, and Patch 5 adds and uses a new interface for
flush_smp_call_function_queue() to convey that a call to do_softirq() is
pending and there is no need to wake up ksoftirqd.
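Roughly, the new interface behaves like the sketch below. A standalone
per-cpu flag is used here purely for illustration; the actual patches
fold this state into softirq_ctrl::cnt on PREEMPT_RT:

static DEFINE_PER_CPU(bool, do_softirq_pending);

/* flush_smp_call_function_queue(): promise before running callbacks */
static inline void set_do_softirq_pending(void)
{
        __this_cpu_write(do_softirq_pending, true);
}

/* Clear just before the promised do_softirq() is performed */
static inline void clr_do_softirq_pending(void)
{
        __this_cpu_write(do_softirq_pending, false);
}

static inline bool should_wakeup_ksoftirqd(void)
{
        /* No wakeup needed: an imminent do_softirq() will do the work */
        return !__this_cpu_read(do_softirq_pending);
}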
Chenyu had reported a regression when running a modified version of
ipistorm, which performs a fixed set of IPIs between two CPUs, on his
setup with the whole of v1 applied. I've benchmarked this series on both
an AMD and an Intel system to catch any significant regression early.
Following are the numbers from a dual socket Intel Ice Lake Xeon server
(2 x 32C/64T) and a 3rd Generation AMD EPYC system (2 x 64C/128T)
running ipistorm between CPU8 and CPU16 (unless stated otherwise
with *):
base: tip/sched/core at commit 7266f0a6d3bb ("fs/bcachefs: Fix
__wait_on_freeing_inode() definition of waitqueue entry")
==================================================================
Test : ipistorm (modified)
Units : % improvement over base kernel in IPI throughput
Interpretation: Higher is better
======================= Intel Ice Lake Xeon ======================
kernel: [pct imp]
performance gov, boost off, idle=poll -3.86%
performance gov, boost off, idle=poll * -3.32%
==================== 3rd Generation AMD EPYC =====================
kernel: [pct imp]
performance gov, boost on, !PREEMPT_RT 1.07%
performance gov, boost on, PREEMPT_RT 19.51%
==================================================================
* cross node setup used CPU 16 on Node 0 and CPU 17 on Node 1 on the
dual socket Intel Ice Lake Xeon system.
Improvements on PREEMPT_RT can perhaps be attributed to cacheline
aligning the per-cpu softirq_ctrl variable.
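If so, the effect would come from something along these lines
(illustrative; the series may express the alignment differently):

struct softirq_ctrl {
        local_lock_t    lock;
        int             cnt;
};

/* Give each CPU's softirq control state its own cacheline */
static DEFINE_PER_CPU_SHARED_ALIGNED(struct softirq_ctrl, softirq_ctrl) = {
        .lock   = INIT_LOCAL_LOCK(softirq_ctrl.lock),
};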
Julia Lawall reported a reduction in the number of load balancing
attempts at the NUMA level on a v6.11-based tip:sched/core. The issue
was root-caused to commit 3dcac251b066 ("sched/core: Introduce SM_IDLE
and an idle re-entry fast-path in __schedule()"), which skips
newidle_balance() when schedule_idle() is called without any task
wakeups on the idle CPU, in favor of a faster idle re-entry. To rule out
any surprises from this series in particular, I tested the bt.B.x
benchmark on which she originally observed this behavior. Following are
the numbers from a dual socket Intel Ice Lake Xeon server (2 x 32C/32T,
SMT off):
==================================================================
Test : bt.B.x (OMP variant)
Units : % improvement over base kernel in Mop/s throughput
Interpretation: Higher is better
======================= Intel Ice Lake Xeon ======================
kernel: [pct imp]
performance gov, boost off, idle=poll, smt off 1.09%
==================================================================
I did not see any discernible difference with this one over the base
kernel.
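For context, the SM_IDLE fast path from commit 3dcac251b066 that was
root-caused above short-circuits __schedule() roughly as follows
(condensed sketch; newidle balancing lives in the pick-next path that
this skips):

        /* Early in __schedule(), before picking the next task: */
        if (sched_mode == SM_IDLE) {
                /* Nothing runnable: keep running the idle task */
                if (!rq->nr_running) {
                        next = prev;
                        goto picked;    /* skips pick_next_task() */
                }
        }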
[1] https://lore.kernel.org/lkml/fcf823f-195e-6c9a-eac3-25f870cb35ac@inria.fr/
[2] https://lore.kernel.org/lkml/20240809092240.6921-1-kprateek.nayak@amd.com/
[3] https://lore.kernel.org/lkml/225e6d74-ed43-51dd-d1aa-c75c86dd58eb@amd.com/
[4] https://lore.kernel.org/lkml/20240710150557.GB27299@noisy.programming.kicks-ass.net/
---
v2..v3:
o Removed ifdefs around local_lock_t. (Peter)
o Reworded Patch 1 to add more details on raising SCHED_SOFTIRQ from
flush_smp_call_function_queue() and why it should be okay on
PREEMPT_RT.
o Updated the trace data in Patch 5.
o More benchmarking.
v2: https://lore.kernel.org/lkml/20240904111223.1035-1-kprateek.nayak@amd.com/
v1..v2:
o Broke out the PREEMPT_RT unification and idle load balance fixes into
a separate series (this one) and posted the SM_IDLE fast-path
enhancements separately.
o Worked around the splat on PREEMPT_RT kernels caused by raising
SCHED_SOFTIRQ from nohz_csd_func() in the context of
flush_smp_call_function_queue(), which is undesirable on PREEMPT_RT
kernels. (Please refer to commit 1a90bfd22020 ("smp: Make softirq
handling RT safe in flush_smp_call_function_queue()").)
o Reused softirq_ctrl::cnt from PREEMPT_RT to prevent unnecessary
wakeups of ksoftirqd. (Peter)
This unifies should_wakeup_ksoftirqd() and adds an interface to
indicate an impending call to do_softirq() (set_do_softirq_pending())
and clear it just before fulfilling the promise
(clr_do_softirq_pending()).
o More benchmarking.
v1: https://lore.kernel.org/lkml/20240710090210.41856-1-kprateek.nayak@amd.com/
--
K Prateek Nayak (5):
softirq: Allow raising SCHED_SOFTIRQ from SMP-call-function on RT
kernel
sched/core: Remove the unnecessary need_resched() check in
nohz_csd_func()
softirq: Mask reads of softirq_ctrl.cnt with SOFTIRQ_MASK for
PREEMPT_RT
softirq: Unify should_wakeup_ksoftirqd()
softirq: Avoid unnecessary wakeup of ksoftirqd when a call to
do_softirq() is pending
kernel/sched/core.c | 2 +-
kernel/sched/smp.h | 9 +++++
kernel/smp.c | 2 +
kernel/softirq.c | 93 +++++++++++++++++++++++++++------------------
4 files changed, 67 insertions(+), 39 deletions(-)
--
2.34.1