Message-ID: <ZQrm6e9apSMwDh5E@chenyu5-mobl2.ccr.corp.intel.com>
Date: Wed, 20 Sep 2023 20:34:49 +0800
From: Chen Yu <yu.c.chen@...el.com>
To: Mathieu Desnoyers <mathieu.desnoyers@...icios.com>
CC: Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>,
Juri Lelli <juri.lelli@...hat.com>,
Tim Chen <tim.c.chen@...el.com>, Aaron Lu <aaron.lu@...el.com>,
Dietmar Eggemann <dietmar.eggemann@....com>,
"Steven Rostedt" <rostedt@...dmis.org>,
Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
Daniel Bristot de Oliveira <bristot@...hat.com>,
"Valentin Schneider" <vschneid@...hat.com>,
K Prateek Nayak <kprateek.nayak@....com>,
"Gautham R . Shenoy" <gautham.shenoy@....com>,
<linux-kernel@...r.kernel.org>
Subject: Re: [RFC PATCH 2/2] sched/fair: skip the cache hot CPU in
select_idle_cpu()
Hi Mathieu,
On 2023-09-11 at 11:26:50 -0400, Mathieu Desnoyers wrote:
> On 9/10/23 22:50, Chen Yu wrote:
> > When task p is woken up, the scheduler leverages select_idle_sibling()
> > to find an idle CPU for it. p's previous CPU is usually preferred
> > because it can improve cache locality. However, in many cases the
> > previous CPU has already been taken by other wakees, so p has to
> > find another idle CPU.
> >
> > Inspired by Mathieu's idea[1], consider the sleep time of the task.
> > If that task is a short-sleeping one, keep p's previous CPU idle
> > for a short while. Later, when p is woken up, it can choose its
> > previous CPU in select_idle_sibling(). While p's previous CPU is
> > reserved, other wakees are not allowed to choose this CPU in
> > select_idle_cpu(). The reservation period is set to the task's
> > average sleep time. That is to say, if p is a short-sleeping task,
> > there is no need to migrate p to another idle CPU.
> >
> > This does not break the work conservation of the scheduler,
> > because the wakee will still try its best to find an idle CPU.
> > The difference is that different idle CPUs might have different
> > priorities. On the other hand, in theory this extra check could
> > increase the failure ratio of select_idle_cpu(), but per the
> > initial test results, no regression is detected.
> >
> > Baseline: tip/sched/core, on top of:
> > Commit 3f4feb58037a ("sched: Misc cleanups")
> >
> > Benchmark results on Intel Sapphire Rapids, 112 CPUs/socket, 2 sockets.
> > cpufreq governor is performance, turbo boost is disabled, C-states deeper
> > than C1 are disabled, Numa balancing is disabled.
> >
> > netperf
> > =======
> > case load baseline(std%) compare%( std%)
> > UDP_RR 56-threads 1.00 ( 1.34) +1.05 ( 1.04)
> > UDP_RR 112-threads 1.00 ( 7.94) -0.68 ( 14.42)
> > UDP_RR 168-threads 1.00 ( 33.17) +49.63 ( 5.96)
> > UDP_RR 224-threads 1.00 ( 13.52) +122.53 ( 18.50)
> >
> > Noticeable improvements in netperf are observed in the 168- and
> > 224-thread cases.
> >
> > hackbench
> > =========
> > case load baseline(std%) compare%( std%)
> > process-pipe 1-groups 1.00 ( 5.61) -4.69 ( 1.48)
> > process-pipe 2-groups 1.00 ( 8.74) -0.24 ( 3.10)
> > process-pipe 4-groups 1.00 ( 3.52) +1.61 ( 4.41)
> > process-sockets 1-groups 1.00 ( 4.73) +2.32 ( 0.95)
> > process-sockets 2-groups 1.00 ( 1.27) -3.29 ( 0.97)
> > process-sockets 4-groups 1.00 ( 0.09) +0.24 ( 0.09)
> > threads-pipe 1-groups 1.00 ( 10.44) -5.88 ( 1.49)
> > threads-pipe 2-groups 1.00 ( 19.15) +5.31 ( 12.90)
> > threads-pipe 4-groups 1.00 ( 1.74) -5.01 ( 6.44)
> > threads-sockets 1-groups 1.00 ( 1.58) -1.79 ( 0.43)
> > threads-sockets 2-groups 1.00 ( 1.19) -8.43 ( 6.91)
> > threads-sockets 4-groups 1.00 ( 0.10) -0.09 ( 0.07)
> >
> > schbench(old)
> > =============
> > case load baseline(std%) compare%( std%)
> > normal 1-mthreads 1.00 ( 0.63) +1.28 ( 0.37)
> > normal 2-mthreads 1.00 ( 8.33) +1.58 ( 2.83)
> > normal 4-mthreads 1.00 ( 2.48) -2.98 ( 3.28)
> > normal 8-mthreads 1.00 ( 3.97) +5.01 ( 1.28)
> >
> > Not much difference is observed in hackbench/schbench, due to
> > run-to-run variance.
> >
> > Link: https://lore.kernel.org/lkml/20230905171105.1005672-2-mathieu.desnoyers@efficios.com/ #1
> > Suggested-by: Tim Chen <tim.c.chen@...el.com>
> > Signed-off-by: Chen Yu <yu.c.chen@...el.com>
> > ---
> > kernel/sched/fair.c | 30 +++++++++++++++++++++++++++---
> > kernel/sched/features.h | 1 +
> > kernel/sched/sched.h | 1 +
> > 3 files changed, 29 insertions(+), 3 deletions(-)
> >
> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > index e20f50726ab8..fe3b760c9654 100644
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -6629,6 +6629,21 @@ static void dequeue_task_fair(struct rq *rq, struct task_struct *p, int flags)
> > hrtick_update(rq);
> > now = sched_clock_cpu(cpu_of(rq));
> > p->se.prev_sleep_time = task_sleep ? now : 0;
> > +#ifdef CONFIG_SMP
> > + /*
> > + * If this rq will become idle, and dequeued task is
> > + * a short sleeping one, check if we can reserve
> > + * this idle CPU for that task for a short while.
> > + * During this reservation period, other wakees will
> > + * skip this 'idle' CPU in select_idle_cpu(), and this
> > + * short sleeping task can pick its previous CPU in
> > + * select_idle_sibling(), which brings better cache
> > + * locality.
> > + */
> > + if (sched_feat(SIS_CACHE) && task_sleep && !rq->nr_running &&
> > + p->se.sleep_avg && p->se.sleep_avg < sysctl_sched_migration_cost)
> > + rq->cache_hot_timeout = now + p->se.sleep_avg;
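(The select_idle_cpu() side of the patch is not quoted above; roughly, a
reserved CPU is skipped while its timeout has not expired, unless it is the
waking task's own previous CPU. A hypothetical sketch only, with
cache_hot_cpu() being an illustrative helper name, not necessarily what the
patch does:)

        /* Illustrative sketch: should select_idle_cpu() skip this CPU? */
        static bool cache_hot_cpu(int cpu, struct task_struct *p)
        {
                if (!sched_feat(SIS_CACHE))
                        return false;

                /* The reservation has expired (or was never set). */
                if (sched_clock_cpu(cpu) >= cpu_rq(cpu)->cache_hot_timeout)
                        return false;

                /* The CPU is reserved for p itself (its previous CPU). */
                if (cpu == task_cpu(p))
                        return false;

                /* Still reserved for a short-sleeping task: skip it. */
                return true;
        }
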
>
> This is really cool!
>
> There is one scenario that worries me with this approach: workloads
> that sleep for a long time and then have short blocked periods.
> Those bursts will likely bring the average to values too high
> to stay below sysctl_sched_migration_cost.
>
> I wonder if changing the code above for the following would help ?
>
> if (sched_feat(SIS_CACHE) && task_sleep && !rq->nr_running && p->se.sleep_avg)
> rq->cache_hot_timeout = now + min(sysctl_sched_migration_cost, p->se.sleep_avg);
>
> For tasks that have a large sleep_avg, it would activate this rq
> "appear as not idle for rq selection" scheme for a window of
> sysctl_sched_migration_cost. If the sleep ends up being a long one,
> preventing other tasks from being migrated to this rq for a tiny
> window should not matter performance-wise. I would expect that it
> could help workloads that come in bursts.
>
After revisiting this, I think the approach above might be better, because
it takes all the historical data into account rather than only the short
sleep durations, which is what the following does:
if ((flags & ENQUEUE_WAKEUP) && last_sleep && cpu_online(task_cpu(p)) &&
now > last_sleep && now - last_sleep < sysctl_sched_migration_cost)
update_avg(&p->se.burst_sleep_avg, now - last_sleep);
Say a task sleeps for 1 ms once, and in all the subsequent context switches
it sleeps much longer than 1 ms; then burst_sleep_avg will remain 1 ms,
which no longer reflects the behavior of this task.
update_avg() is actually an Exponentially Weighted Moving Average:
new_avg = 0.875 * old_avg + 0.125 * latest_sleep_duration
which approximately reflects the latest 1/0.125 = 8 samples. I guess
8 is small enough to capture the burst sleep duration.
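For reference, update_avg() is roughly the following small EWMA helper (a
sketch; the in-tree version may differ in minor details):

        /*
         * EWMA with a weight of 1/8 for the newest sample:
         * new_avg = 7/8 * old_avg + 1/8 * sample
         */
        static void update_avg(u64 *avg, u64 sample)
        {
                s64 diff = sample - *avg;

                *avg += diff / 8;
        }
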
Unless we want to capture the scenario where, among the latest 8 sleep
events, there is 1 short sleep and 7 long sleeps, and we still want to
honor that 1 short sleep and reserve the idle CPU. Then we could either
use the min() logic you proposed above, or use the logic you proposed in
another reply but enhance it by clearing that burst average sleep duration
after 8 consecutive long sleep events, which looks a little like overkill.
Anyway, I can get some data based on this.
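Just to make that last variant concrete, a rough sketch of the "clear after
consecutive long sleeps" idea (long_sleep_count and the threshold of 8 are
made up for illustration, not part of the posted patch):

        /*
         * Hypothetical: reset burst_sleep_avg once we have seen 8
         * consecutive sleeps longer than sysctl_sched_migration_cost,
         * so a single old short sleep stops reserving the CPU forever.
         */
        if (now - last_sleep < sysctl_sched_migration_cost) {
                update_avg(&p->se.burst_sleep_avg, now - last_sleep);
                p->se.long_sleep_count = 0;
        } else if (++p->se.long_sleep_count >= 8) {
                p->se.burst_sleep_avg = 0;
        }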
thanks,
Chenyu