Message-ID: <20241030071557.1422-4-kprateek.nayak@amd.com>
Date: Wed, 30 Oct 2024 07:15:57 +0000
From: K Prateek Nayak <kprateek.nayak@....com>
To: Ingo Molnar <mingo@...hat.com>, Peter Zijlstra <peterz@...radead.org>,
Juri Lelli <juri.lelli@...hat.com>, Vincent Guittot
<vincent.guittot@...aro.org>, Sebastian Andrzej Siewior
<bigeasy@...utronix.de>, Clark Williams <clrkwllms@...nel.org>, "Steven
Rostedt" <rostedt@...dmis.org>, <linux-kernel@...r.kernel.org>,
<linux-rt-devel@...ts.linux.dev>
CC: Dietmar Eggemann <dietmar.eggemann@....com>, Ben Segall
<bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>, Valentin Schneider
<vschneid@...hat.com>, Thomas Gleixner <tglx@...utronix.de>, Tejun Heo
<tj@...nel.org>, Jens Axboe <axboe@...nel.dk>, NeilBrown <neilb@...e.de>,
Zqiang <qiang.zhang1211@...il.com>, Caleb Sander Mateos
<csander@...estorage.com>, "Gautham R . Shenoy" <gautham.shenoy@....com>,
Chen Yu <yu.c.chen@...el.com>, Julia Lawall <Julia.Lawall@...ia.fr>, "K
Prateek Nayak" <kprateek.nayak@....com>, Julia Lawall <julia.lawall@...ia.fr>
Subject: [PATCH v4 3/3] sched/core: Prevent wakeup of ksoftirqd during idle load balance
The scheduler raises a SCHED_SOFTIRQ to trigger a load balancing event
from the IPI handler on the idle CPU. Since the softirq can be raised
from flush_smp_call_function_queue(), it can end up waking up ksoftirqd,
which can give the illusion of the idle CPU being busy when doing an
idle load balance.
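For reference, the relevant idle-path flow looks roughly as follows (a
simplified paraphrase of flush_smp_call_function_queue() in
kernel/smp.c; exact details may differ between kernel versions):

void flush_smp_call_function_queue(void)
{
	unsigned int was_pending;
	unsigned long flags;

	if (llist_empty(this_cpu_ptr(&call_single_queue)))
		return;

	local_irq_save(flags);
	/* Softirqs already pending before the flush */
	was_pending = local_softirq_pending();
	/* Runs the queued CSD callbacks, including nohz_csd_func() */
	__flush_smp_call_function_queue(true);
	/* Softirqs raised by the callbacks are serviced right here ... */
	if (local_softirq_pending())
		do_softirq_post_smp_call_flush(was_pending);
	local_irq_restore(flags);
}

... yet by that point raise_softirq_irqoff() called from nohz_csd_func()
has already woken ksoftirqd, since it does not run from hardirq context.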
Adding a trace_printk() in nohz_csd_func() at the point of raising
SCHED_SOFTIRQ and enabling trace events for sched_switch, sched_wakeup,
and softirq_entry (for the SCHED_SOFTIRQ vector alone) helps observe the
current behavior:
<idle>-0 [000] dN.1.: nohz_csd_func: Raising SCHED_SOFTIRQ from nohz_csd_func
<idle>-0 [000] dN.4.: sched_wakeup: comm=ksoftirqd/0 pid=16 prio=120 target_cpu=000
<idle>-0 [000] .Ns1.: softirq_entry: vec=7 [action=SCHED]
<idle>-0 [000] .Ns1.: softirq_exit: vec=7 [action=SCHED]
<idle>-0 [000] d..2.: sched_switch: prev_comm=swapper/0 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=ksoftirqd/0 next_pid=16 next_prio=120
ksoftirqd/0-16 [000] d..2.: sched_switch: prev_comm=ksoftirqd/0 prev_pid=16 prev_prio=120 prev_state=S ==> next_comm=swapper/0 next_pid=0 next_prio=120
...
ksoftirqd is woken up before the idle thread calls
do_softirq_post_smp_call_flush(), which can make the runqueue appear
busy and prevent the idle load balancer from pulling tasks from an
overloaded runqueue towards itself [1].
Since the raised softirq is guaranteed to be serviced in irq_exit() or
via do_softirq_post_smp_call_flush(), set SCHED_SOFTIRQ without checking
the need to wake up ksoftirqd for idle load balancing.
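For context, the two helpers only differ in the ksoftirqd wakeup;
roughly (a paraphrase of kernel/softirq.c, not verbatim):

void __raise_softirq_irqoff(unsigned int nr)
{
	lockdep_assert_irqs_disabled();
	trace_softirq_raise(nr);
	/* Only mark the vector pending; leave servicing to the caller's path */
	or_softirq_pending(1UL << nr);
}

void raise_softirq_irqoff(unsigned int nr)
{
	__raise_softirq_irqoff(nr);

	/*
	 * Outside of (soft)irq context nothing is about to service the
	 * pending softirqs, so wake ksoftirqd. This is the wakeup that
	 * makes the idle CPU look busy in the scenario above.
	 */
	if (!in_interrupt())
		wakeup_softirqd();
}

Switching nohz_csd_func() to __raise_softirq_irqoff() therefore only
skips the wakeup; the pending bit is still set and serviced as described
above.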
Following are the observations with the change in place, enabling the
same set of events:
<idle>-0 [000] dN.1.: nohz_csd_func: Raising SCHED_SOFTIRQ for nohz_idle_balance
<idle>-0 [000] dN.1.: softirq_raise: vec=7 [action=SCHED]
<idle>-0 [000] .Ns1.: softirq_entry: vec=7 [action=SCHED]
No unnecessary ksoftirqd wakeups are seen from the idle task's context
to service the softirq.
Fixes: b2a02fc43a1f ("smp: Optimize send_call_function_single_ipi()")
Reported-by: Julia Lawall <julia.lawall@...ia.fr>
Closes: https://lore.kernel.org/lkml/fcf823f-195e-6c9a-eac3-25f870cb35ac@inria.fr/ [1]
Suggested-by: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
Signed-off-by: K Prateek Nayak <kprateek.nayak@....com>
---
v3..v4:
o New patch based on Sebastian's suggestion.
---
kernel/sched/core.c | 13 ++++++++++++-
1 file changed, 12 insertions(+), 1 deletion(-)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index aaf99c0bcb49..2ee3621d6e7e 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -1244,7 +1244,18 @@ static void nohz_csd_func(void *info)
rq->idle_balance = idle_cpu(cpu);
if (rq->idle_balance) {
rq->nohz_idle_balance = flags;
- raise_softirq_irqoff(SCHED_SOFTIRQ);
+
+ /*
+ * Don't wakeup ksoftirqd when raising SCHED_SOFTIRQ
+ * since the idle load balancer may mistake wakeup of
+ * ksoftirqd as a genuine task wakeup and bail out from
+ * load balancing early. Since it is guaranteed that
+ * pending softirqs will be handled soon, either on
+ * irq_exit() or via do_softirq_post_smp_call_flush(),
+ * raise SCHED_SOFTIRQ without checking the need to
+ * wakeup ksoftirqd.
+ */
+ __raise_softirq_irqoff(SCHED_SOFTIRQ);
}
}
--
2.34.1