Message-ID: <ZP7SYu+gxlc/YjHu@chenyu5-mobl2>
Date: Mon, 11 Sep 2023 16:40:02 +0800
From: Chen Yu <yu.c.chen@...el.com>
To: Aaron Lu <aaron.lu@...el.com>
CC: Peter Zijlstra <peterz@...radead.org>,
Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
Ingo Molnar <mingo@...hat.com>,
"Vincent Guittot" <vincent.guittot@...aro.org>,
Juri Lelli <juri.lelli@...hat.com>,
Tim Chen <tim.c.chen@...el.com>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Steven Rostedt <rostedt@...dmis.org>,
Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
"Daniel Bristot de Oliveira" <bristot@...hat.com>,
Valentin Schneider <vschneid@...hat.com>,
"K Prateek Nayak" <kprateek.nayak@....com>,
"Gautham R . Shenoy" <gautham.shenoy@....com>,
<linux-kernel@...r.kernel.org>
Subject: Re: [RFC PATCH 2/2] sched/fair: skip the cache hot CPU in
select_idle_cpu()
Hi Aaron,
Thanks for the review.
On 2023-09-11 at 15:26:29 +0800, Aaron Lu wrote:
> On Mon, Sep 11, 2023 at 10:50:00AM +0800, Chen Yu wrote:
> > When task p is woken up, the scheduler leverages select_idle_sibling()
> > to find an idle CPU for it. p's previous CPU is usually preferred
> > because it improves cache locality. However, in many cases the
> > previous CPU has already been taken by other wakees, so p has to
> > find another idle CPU.
> >
> > Inspired by Mathieu's idea[1], consider the sleep time of the task.
> > If that task is a short-sleeping one, keep p's previous CPU idle
> > for a short while. Later, when p is woken up, it can choose its
> > previous CPU in select_idle_sibling(). While p's previous CPU is
> > reserved, other wakees are not allowed to choose this CPU in
> > select_idle_cpu(). The reservation period is set to the task's
> > average sleep time. That is to say, if p is a short-sleeping task,
> > there is no need to migrate p to another idle CPU.
> >
> > This does not break the work conservation of the scheduler,
> > because the wakee will still try its best to find an idle CPU.
> > The difference is that different idle CPUs might have different
> > priorities. On the other hand, in theory this extra check could
> > increase the failure ratio of select_idle_cpu(), but per the
> > initial test results, no regression was detected.
> >
> > Baseline: tip/sched/core, on top of:
> > Commit 3f4feb58037a ("sched: Misc cleanups")
> >
> > Benchmark results on Intel Sapphire Rapids, 112 CPUs/socket, 2 sockets.
> > cpufreq governor is performance, turbo boost is disabled, C-states deeper
> > than C1 are disabled, NUMA balancing is disabled.
> >
> > netperf
> > =======
> > case load baseline(std%) compare%( std%)
> > UDP_RR 56-threads 1.00 ( 1.34) +1.05 ( 1.04)
> > UDP_RR 112-threads 1.00 ( 7.94) -0.68 ( 14.42)
> > UDP_RR 168-threads 1.00 ( 33.17) +49.63 ( 5.96)
> > UDP_RR 224-threads 1.00 ( 13.52) +122.53 ( 18.50)
> >
> > Noticeable improvements of netperf are observed in the 168 and
> > 224 thread cases.
> >
> > hackbench
> > =========
> > case load baseline(std%) compare%( std%)
> > process-pipe 1-groups 1.00 ( 5.61) -4.69 ( 1.48)
> > process-pipe 2-groups 1.00 ( 8.74) -0.24 ( 3.10)
> > process-pipe 4-groups 1.00 ( 3.52) +1.61 ( 4.41)
> > process-sockets 1-groups 1.00 ( 4.73) +2.32 ( 0.95)
> > process-sockets 2-groups 1.00 ( 1.27) -3.29 ( 0.97)
> > process-sockets 4-groups 1.00 ( 0.09) +0.24 ( 0.09)
> > threads-pipe 1-groups 1.00 ( 10.44) -5.88 ( 1.49)
> > threads-pipe 2-groups 1.00 ( 19.15) +5.31 ( 12.90)
> > threads-pipe 4-groups 1.00 ( 1.74) -5.01 ( 6.44)
> > threads-sockets 1-groups 1.00 ( 1.58) -1.79 ( 0.43)
> > threads-sockets 2-groups 1.00 ( 1.19) -8.43 ( 6.91)
> > threads-sockets 4-groups 1.00 ( 0.10) -0.09 ( 0.07)
> >
> > schbench(old)
> > =============
> > case load baseline(std%) compare%( std%)
> > normal 1-mthreads 1.00 ( 0.63) +1.28 ( 0.37)
> > normal 2-mthreads 1.00 ( 8.33) +1.58 ( 2.83)
> > normal 4-mthreads 1.00 ( 2.48) -2.98 ( 3.28)
> > normal 8-mthreads 1.00 ( 3.97) +5.01 ( 1.28)
> >
> > Not much difference is observed in hackbench/schbench, due to
> > run-to-run variance.
> >
> > Link: https://lore.kernel.org/lkml/20230905171105.1005672-2-mathieu.desnoyers@efficios.com/ #1
> > Suggested-by: Tim Chen <tim.c.chen@...el.com>
> > Signed-off-by: Chen Yu <yu.c.chen@...el.com>
> > ---
> > kernel/sched/fair.c | 30 +++++++++++++++++++++++++++---
> > kernel/sched/features.h | 1 +
> > kernel/sched/sched.h | 1 +
> > 3 files changed, 29 insertions(+), 3 deletions(-)
> >
> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > index e20f50726ab8..fe3b760c9654 100644
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -6629,6 +6629,21 @@ static void dequeue_task_fair(struct rq *rq, struct task_struct *p, int flags)
> > hrtick_update(rq);
> > now = sched_clock_cpu(cpu_of(rq));
> > p->se.prev_sleep_time = task_sleep ? now : 0;
> > +#ifdef CONFIG_SMP
> > + /*
> > + * If this rq will become idle, and dequeued task is
> > + * a short sleeping one, check if we can reserve
> > + * this idle CPU for that task for a short while.
> > + * During this reservation period, other wakees will
> > + * skip this 'idle' CPU in select_idle_cpu(), and this
> > + * short sleeping task can pick its previous CPU in
> > + * select_idle_sibling(), which brings better cache
> > + * locality.
> > + */
> > + if (sched_feat(SIS_CACHE) && task_sleep && !rq->nr_running &&
> > + p->se.sleep_avg && p->se.sleep_avg < sysctl_sched_migration_cost)
> > + rq->cache_hot_timeout = now + p->se.sleep_avg;
>
> Should this be written as:
> rq->cache_hot_timeout = max(rq->cache_hot_timeout, now + p->se.sleep_avg);
> ?
>
> An even earlier task may have a bigger sleep_avg, and overwriting
> rq->cache_hot_timeout here with an earlier time may cause that task
> to migrate. Not sure how much impact this can have, just something
> that hit me while reading this code.
>
>
My previous thought was that SIS_CACHE only honors the average sleep time
of the latest dequeued task. Say, task p1 goes to sleep, then task p2 is
woken up on the same CPU and later goes to sleep: we only honor the sleep
time of p2. Because we don't know how long p2 has run and how much p2 has
polluted the L1/L2 cache, it might no longer make sense to wake p1 up on
its previous CPU (is this CPU still cache-hot for p1?).
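To illustrate the difference (an untested sketch, using the fields
introduced in this patch):

	/* current patch: the latest dequeued task always wins */
	rq->cache_hot_timeout = now + p->se.sleep_avg;

	/*
	 * your suggestion: keep the longer reservation, so an earlier
	 * dequeued task with a bigger sleep_avg is not overwritten by
	 * a later short-sleeper, at the risk that the later task has
	 * already polluted the cache
	 */
	rq->cache_hot_timeout = max(rq->cache_hot_timeout,
				    now + p->se.sleep_avg);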
> > +#endif
> > }
> >
> > #ifdef CONFIG_SMP
> > @@ -6982,8 +6997,13 @@ static inline int find_idlest_cpu(struct sched_domain *sd, struct task_struct *p
> > static inline int __select_idle_cpu(int cpu, struct task_struct *p)
> > {
> > if ((available_idle_cpu(cpu) || sched_idle_cpu(cpu)) &&
> > - sched_cpu_cookie_match(cpu_rq(cpu), p))
> > + sched_cpu_cookie_match(cpu_rq(cpu), p)) {
> > + if (sched_feat(SIS_CACHE) &&
> > + sched_clock_cpu(cpu) < cpu_rq(cpu)->cache_hot_timeout)
> > + return -1;
> > +
>
> Maybe introduce a new function that also considers rq->cache_hot_timeout,
> like available_idle_cpu_migrate(), so that the logic above and below
> can be simplified a bit?
>
>
Yes, that would be simpler; I'll do that in the next version.
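Something like this (an untested sketch; the helper name follows your
suggestion, and it reuses the existing available_idle_cpu() and
sched_idle_cpu() helpers):

/*
 * Like available_idle_cpu(), but also requires that the CPU's
 * cache-hot reservation has expired, so a migrating wakee does
 * not take a CPU reserved for a short-sleeping task.
 */
static inline bool available_idle_cpu_migrate(int cpu)
{
	if (!available_idle_cpu(cpu) && !sched_idle_cpu(cpu))
		return false;

	if (sched_feat(SIS_CACHE) &&
	    sched_clock_cpu(cpu) < cpu_rq(cpu)->cache_hot_timeout)
		return false;

	return true;
}

static inline int __select_idle_cpu(int cpu, struct task_struct *p)
{
	if (available_idle_cpu_migrate(cpu) &&
	    sched_cpu_cookie_match(cpu_rq(cpu), p))
		return cpu;

	return -1;
}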
> I was thinking of simply adding that rq->cache_hot_timeout check to
> available_idle_cpu(), but then a long-sleeping task could be forced to
> migrate if its prev_cpu happens to have just descheduled a task that
> set the rq's cache_hot_timeout. I guess that's why you chose to only
> change the idle semantic in select_idle_cpu() but not in
> select_idle_sibling()?
>
>
Yes, sort of. And the reason I did not put this cache-hot check in
available_idle_cpu() or idle_cpu() is mainly that these APIs are generic
and are also invoked by select_idle_sibling(). If the task falls asleep
and is woken up quickly, its previous idle CPU would then be skipped as
well, so no one could use this CPU within the cache-hot period, including
the cache-hot task itself.
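In other words, the prev-CPU fast path in select_idle_sibling() would
keep using the plain idle test, roughly like this (simplified from the
mainline code, untested):

	/*
	 * The task's own previous CPU is checked without the
	 * cache-hot test, so the reservation never locks the
	 * owner out of its own reserved CPU.
	 */
	if (prev != target && cpus_share_cache(prev, target) &&
	    (available_idle_cpu(prev) || sched_idle_cpu(prev)))
		return prev;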
Thanks,
Chenyu