Message-ID: <ZQByY4rDvjejRRs5@chenyu5-mobl2.ccr.corp.intel.com>
Date: Tue, 12 Sep 2023 22:14:59 +0800
From: Chen Yu <yu.c.chen@...el.com>
To: Mathieu Desnoyers <mathieu.desnoyers@...icios.com>
CC: Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>,
Juri Lelli <juri.lelli@...hat.com>,
Tim Chen <tim.c.chen@...el.com>, Aaron Lu <aaron.lu@...el.com>,
Dietmar Eggemann <dietmar.eggemann@....com>,
"Steven Rostedt" <rostedt@...dmis.org>,
Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
Daniel Bristot de Oliveira <bristot@...hat.com>,
"Valentin Schneider" <vschneid@...hat.com>,
K Prateek Nayak <kprateek.nayak@....com>,
"Gautham R . Shenoy" <gautham.shenoy@....com>,
<linux-kernel@...r.kernel.org>
Subject: Re: [RFC PATCH 2/2] sched/fair: skip the cache hot CPU in
select_idle_cpu()
On 2023-09-12 at 10:06:27 -0400, Mathieu Desnoyers wrote:
> On 9/12/23 07:53, Chen Yu wrote:
> > Hi Mathieu,
> >
> > thanks for the review,
> >
> > On 2023-09-11 at 11:43:27 -0400, Mathieu Desnoyers wrote:
> > > On 9/11/23 11:26, Mathieu Desnoyers wrote:
> > > > On 9/10/23 22:50, Chen Yu wrote:
> > > [...]
> > > > > ---
> > > > > kernel/sched/fair.c | 30 +++++++++++++++++++++++++++---
> > > > > kernel/sched/features.h | 1 +
> > > > > kernel/sched/sched.h | 1 +
> > > > > 3 files changed, 29 insertions(+), 3 deletions(-)
> > > > >
> > > > > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > > > > index e20f50726ab8..fe3b760c9654 100644
> > > > > --- a/kernel/sched/fair.c
> > > > > +++ b/kernel/sched/fair.c
> > > > > @@ -6629,6 +6629,21 @@ static void dequeue_task_fair(struct rq *rq,
> > > > > struct task_struct *p, int flags)
> > > > > hrtick_update(rq);
> > > > > now = sched_clock_cpu(cpu_of(rq));
> > > > > p->se.prev_sleep_time = task_sleep ? now : 0;
> > > > > +#ifdef CONFIG_SMP
> > > > > + /*
> > > > > + * If this rq will become idle, and dequeued task is
> > > > > + * a short sleeping one, check if we can reserve
> > > > > + * this idle CPU for that task for a short while.
> > > > > + * During this reservation period, other wakees will
> > > > > + * skip this 'idle' CPU in select_idle_cpu(), and this
> > > > > + * short sleeping task can pick its previous CPU in
> > > > > + * select_idle_sibling(), which brings better cache
> > > > > + * locality.
> > > > > + */
> > > > > + if (sched_feat(SIS_CACHE) && task_sleep && !rq->nr_running &&
> > > > > + p->se.sleep_avg && p->se.sleep_avg <
> > > > > sysctl_sched_migration_cost)
> > > > > + rq->cache_hot_timeout = now + p->se.sleep_avg;
> > > >
> > > > This is really cool!
> > > >
> > > > There is one scenario that worries me with this approach: workloads
> > > > that sleep for a long time and then have short blocked periods.
> > > > Those bursts will likely bring the average to values too high
> > > > to stay below sysctl_sched_migration_cost.
> > > >
> > > > I wonder if changing the code above for the following would help ?
> > > >
> > > > if (sched_feat(SIS_CACHE) && task_sleep && !rq->nr_running &&
> > > > p->se.sleep_avg)
> > > > rq->cache_hot_timeout = now + min(sysctl_sched_migration_cost,
> > > > p->se.sleep_avg);
> > > >
> > > > For tasks that have a large sleep_avg, it would activate this rq
> > > > "appear as not idle for rq selection" scheme for a window of
> > > > sysctl_sched_migration_cost. If the sleep ends up being a long one,
> > > > preventing other tasks from being migrated to this rq for a tiny
> > > > window should not matter performance-wise. I would expect that it
> > > > could help workloads that come in bursts.
> > >
> > > There is perhaps a better way to handle bursts:
> > >
> > > When calculating the sleep_avg, we actually only really care about
> > > the sleep time for short bursts, so we could use the sysctl_sched_migration_cost
> > > to select which of the sleeps should be taken into account in the avg.
> > >
> > > We could rename the field "sleep_avg" to "burst_sleep_avg", and have:
> > >
> > > u64 now = sched_clock_cpu(task_cpu(p));
> > >
> > > if ((flags & ENQUEUE_WAKEUP) && last_sleep && cpu_online(task_cpu(p)) &&
> > > now > last_sleep && now - last_sleep < sysctl_sched_migration_cost)
> > > update_avg(&p->se.burst_sleep_avg, now - last_sleep);
> > >
> > > Then we can have this code is dequeue_task_fair:
> > >
> > > if (sched_feat(SIS_CACHE) && task_sleep && !rq->nr_running && p->se.burst_sleep_avg)
> > > rq->cache_hot_timeout = now + p->se.burst_sleep_avg;
> > >
> > > Thoughts ?
> > >
> >
> > This looks reasonable; it is more fine-grained to monitor only the short sleep
> > durations of that task. Let me try this approach and see if it makes any difference.
> >
>
> One more tweak: given that more than one task can update the cache_hot_timeout forward
> one after another, and given that some tasks have larger burst_sleep_avg values than
> others, I suspect we want to keep the forward movement monotonic with something like:
>
> if (sched_feat(SIS_CACHE) && task_sleep && !rq->nr_running && p->se.burst_sleep_avg &&
> rq->cache_hot_timeout < now + p->se.burst_sleep_avg)
> rq->cache_hot_timeout = now + p->se.burst_sleep_avg;
>
Yeah, Aaron has mentioned this too:
https://lore.kernel.org/lkml/ZP7SYu+gxlc%2FYjHu@chenyu5-mobl2/
May I ask what the benefit of keeping the forward movement monotonic is?
I was thinking we should honor only the latest dequeued task's burst_sleep_avg:
since we don't know whether the earlier dequeued task's cache has been scribbled
over by the latest dequeued task, does it still make sense to wake the earlier
task on its previous CPU?
thanks,
Chenyu