Date:   Tue, 12 Sep 2023 19:53:05 +0800
From:   Chen Yu <yu.c.chen@...el.com>
To:     Mathieu Desnoyers <mathieu.desnoyers@...icios.com>
CC:     Peter Zijlstra <peterz@...radead.org>,
        Ingo Molnar <mingo@...hat.com>,
        Vincent Guittot <vincent.guittot@...aro.org>,
        Juri Lelli <juri.lelli@...hat.com>,
        Tim Chen <tim.c.chen@...el.com>, Aaron Lu <aaron.lu@...el.com>,
        Dietmar Eggemann <dietmar.eggemann@....com>,
        "Steven Rostedt" <rostedt@...dmis.org>,
        Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
        Daniel Bristot de Oliveira <bristot@...hat.com>,
        "Valentin Schneider" <vschneid@...hat.com>,
        K Prateek Nayak <kprateek.nayak@....com>,
        "Gautham R . Shenoy" <gautham.shenoy@....com>,
        <linux-kernel@...r.kernel.org>
Subject: Re: [RFC PATCH 2/2] sched/fair: skip the cache hot CPU in
 select_idle_cpu()

Hi Mathieu,

thanks for the review,

On 2023-09-11 at 11:43:27 -0400, Mathieu Desnoyers wrote:
> On 9/11/23 11:26, Mathieu Desnoyers wrote:
> > On 9/10/23 22:50, Chen Yu wrote:
> [...]
> > > ---
> > >   kernel/sched/fair.c     | 30 +++++++++++++++++++++++++++---
> > >   kernel/sched/features.h |  1 +
> > >   kernel/sched/sched.h    |  1 +
> > >   3 files changed, 29 insertions(+), 3 deletions(-)
> > > 
> > > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > > index e20f50726ab8..fe3b760c9654 100644
> > > --- a/kernel/sched/fair.c
> > > +++ b/kernel/sched/fair.c
> > > @@ -6629,6 +6629,21 @@ static void dequeue_task_fair(struct rq *rq,
> > > struct task_struct *p, int flags)
> > >       hrtick_update(rq);
> > >       now = sched_clock_cpu(cpu_of(rq));
> > >       p->se.prev_sleep_time = task_sleep ? now : 0;
> > > +#ifdef CONFIG_SMP
> > > +    /*
> > > +     * If this rq will become idle, and dequeued task is
> > > +     * a short sleeping one, check if we can reserve
> > > +     * this idle CPU for that task for a short while.
> > > +     * During this reservation period, other wakees will
> > > +     * skip this 'idle' CPU in select_idle_cpu(), and this
> > > +     * short sleeping task can pick its previous CPU in
> > > +     * select_idle_sibling(), which brings better cache
> > > +     * locality.
> > > +     */
> > > +    if (sched_feat(SIS_CACHE) && task_sleep && !rq->nr_running &&
> > > +        p->se.sleep_avg && p->se.sleep_avg <
> > > sysctl_sched_migration_cost)
> > > +        rq->cache_hot_timeout = now + p->se.sleep_avg;
> > 
> > This is really cool!
> > 
> > There is one scenario that worries me with this approach: workloads
> > that sleep for a long time and then have short blocked periods.
> > Those bursts will likely bring the average to values too high
> > to stay below sysctl_sched_migration_cost.
> > 
> > I wonder if changing the code above for the following would help ?
> > 
> > if (sched_feat(SIS_CACHE) && task_sleep && !rq->nr_running &&
> > p->se.sleep_avg)
> >      rq->cache_hot_timeout = now + min(sysctl_sched_migration_cost,
> > p->se.sleep_avg);
> > 
> > For tasks that have a large sleep_avg, it would activate this rq
> > "appear as not idle for rq selection" scheme for a window of
> > sysctl_sched_migration_cost. If the sleep ends up being a long one,
> > preventing other tasks from being migrated to this rq for a tiny
> > window should not matter performance-wise. I would expect that it
> > could help workloads that come in bursts.
> 
> There is perhaps a better way to handle bursts:
> 
> When calculating the sleep_avg, we actually only really care about
> the sleep time for short bursts, so we could use the sysctl_sched_migration_cost
> to select which of the sleeps should be taken into account in the avg.
> 
> We could rename the field "sleep_avg" to "burst_sleep_avg", and have:
> 
> u64 now = sched_clock_cpu(task_cpu(p));
> 
> if ((flags & ENQUEUE_WAKEUP) && last_sleep && cpu_online(task_cpu(p)) &&
>     now > last_sleep && now - last_sleep < sysctl_sched_migration_cost)
> 	update_avg(&p->se.burst_sleep_avg, now - last_sleep);
> 
> Then we can have this code in dequeue_task_fair:
> 
> if (sched_feat(SIS_CACHE) && task_sleep && !rq->nr_running && p->se.burst_sleep_avg)
> 	rq->cache_hot_timeout = now + p->se.burst_sleep_avg;
> 
> Thoughts ?
> 

This looks reasonable; it is more fine-grained to monitor only the short sleep
durations of that task. Let me try this approach and see if it makes any difference.

thanks,
Chenyu

> Thanks,
> 
> Mathieu
> 
> -- 
> Mathieu Desnoyers
> EfficiOS Inc.
> https://www.efficios.com
> 
