Message-Id: <20220513062427.2375743-1-dtcccc@linux.alibaba.com>
Date: Fri, 13 May 2022 14:24:27 +0800
From: Tianchen Ding <dtcccc@linux.alibaba.com>
To: Ingo Molnar <mingo@redhat.com>,
	Peter Zijlstra <peterz@infradead.org>,
	Juri Lelli <juri.lelli@redhat.com>,
	Vincent Guittot <vincent.guittot@linaro.org>,
	Dietmar Eggemann <dietmar.eggemann@arm.com>,
	Steven Rostedt <rostedt@goodmis.org>,
	Ben Segall <bsegall@google.com>, Mel Gorman <mgorman@suse.de>,
	Daniel Bristot de Oliveira <bristot@redhat.com>
Cc: linux-kernel@vger.kernel.org
Subject: [RFC PATCH] sched: Queue task on wakelist in the same llc if the wakee cpu is idle
We noticed that commit 518cd6234178 ("sched: Only queue remote wakeups
when crossing cache boundaries") disabled queueing tasks on the
wakelist when the CPUs share LLC. This is because, at that time, the
scheduler always had to send an IPI for ttwu_queue_wakelist(). Nowadays
ttwu_queue_wakelist() also supports TIF_POLLING, so this is no longer a
problem when the wakee CPU is idle polling.
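
For reference, the wakelist path ends up in send_call_function_single_ipi()
(kernel/sched/core.c in current mainline), which only raises an IPI when it
fails to set TIF_NEED_RESCHED on a polling idle task; roughly (comments
ours):

	void send_call_function_single_ipi(int cpu)
	{
		struct rq *rq = cpu_rq(cpu);

		/* A polling idle CPU notices TIF_NEED_RESCHED by itself... */
		if (!set_nr_if_polling(rq->idle))
			/* ...so the IPI is only needed when it is not polling. */
			arch_send_call_function_single_ipi(cpu);
		else
			trace_sched_wake_idle_without_ipi(cpu);
	}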
Benefits:
Queueing the task on an idle CPU improves performance on the waker CPU
and utilization on the wakee CPU, and further improves locality because
the wakee CPU activates the task on its own rq. This patch improves
response time on our real Java workloads, where wakeups happen
frequently.
Does this patch bring IPI flooding?
For archs with TIF_POLLING_NRFLAG (e.g., x86), there is no difference
when the wakee CPU is idle polling: no IPI is sent in either case. If
the wakee CPU is idle but not polling, the later check_preempt_curr()
would send an IPI anyway.
For archs without TIF_POLLING_NRFLAG (e.g., arm64), the IPI is
unavoidable either way, since the later check_preempt_curr() sends an
IPI when the wakee CPU is idle.
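
This is visible in the tail of resched_curr(), which check_preempt_curr()
ends up calling for a remote idle wakee (abridged from mainline, comments
ours):

	/* resched_curr(), remote-CPU case (abridged): */
	if (set_nr_and_not_polling(curr))
		smp_send_reschedule(cpu);		/* idle, not polling: IPI */
	else
		trace_sched_wake_idle_without_ipi(cpu);	/* polling: no IPI */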
Benchmark:
Running "schbench -m 2 -t 8" on an Intel Xeon Platinum 8269CY:
without patch:
Latency percentiles (usec)
	50.0000th: 10
	75.0000th: 14
	90.0000th: 16
	95.0000th: 16
	*99.0000th: 17
	99.5000th: 20
	99.9000th: 23
	min=0, max=28

with patch:
Latency percentiles (usec)
	50.0000th: 6
	75.0000th: 8
	90.0000th: 9
	95.0000th: 9
	*99.0000th: 10
	99.5000th: 10
	99.9000th: 14
	min=0, max=16
We also tested unixbench and saw about a 10% improvement in Pipe-based
Context Switching, with no performance regression in the other test
cases.
For arm64, we tested schbench and unixbench on a Kunpeng 920; the
improvement there is not as pronounced as on x86, and there is no
performance regression.
Signed-off-by: Tianchen Ding <dtcccc@linux.alibaba.com>
---
kernel/sched/core.c | 12 +++++++++---
1 file changed, 9 insertions(+), 3 deletions(-)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 51efaabac3e4..cae5011a8b1f 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -3820,6 +3820,9 @@ static inline bool ttwu_queue_cond(int cpu, int wake_flags)
 	if (!cpu_active(cpu))
 		return false;
 
+	if (cpu == smp_processor_id())
+		return false;
+
 	/*
 	 * If the CPU does not share cache, then queue the task on the
 	 * remote rqs wakelist to avoid accessing remote data.
@@ -3827,6 +3830,12 @@ static inline bool ttwu_queue_cond(int cpu, int wake_flags)
 	if (!cpus_share_cache(smp_processor_id(), cpu))
 		return true;
 
+	/*
+	 * If the CPU is idle, let itself do activation to improve utilization.
+	 */
+	if (available_idle_cpu(cpu))
+		return true;
+
 	/*
 	 * If the task is descheduling and the only running task on the
 	 * CPU then use the wakelist to offload the task activation to
@@ -3842,9 +3851,6 @@ static inline bool ttwu_queue_cond(int cpu, int wake_flags)
 static bool ttwu_queue_wakelist(struct task_struct *p, int cpu, int wake_flags)
 {
 	if (sched_feat(TTWU_QUEUE) && ttwu_queue_cond(cpu, wake_flags)) {
-		if (WARN_ON_ONCE(cpu == smp_processor_id()))
-			return false;
-
 		sched_clock_cpu(cpu); /* Sync clocks across CPUs */
 		__ttwu_queue_wakelist(p, cpu, wake_flags);
 		return true;
--
2.27.0