Message-ID: <20171213163653.GD8264@e110439-lin>
Date:   Wed, 13 Dec 2017 16:36:53 +0000
From:   Patrick Bellasi <patrick.bellasi@....com>
To:     Peter Zijlstra <peterz@...radead.org>
Cc:     linux-kernel@...r.kernel.org, linux-pm@...r.kernel.org,
        Ingo Molnar <mingo@...hat.com>,
        "Rafael J . Wysocki" <rafael.j.wysocki@...el.com>,
        Viresh Kumar <viresh.kumar@...aro.org>,
        Vincent Guittot <vincent.guittot@...aro.org>,
        Paul Turner <pjt@...gle.com>,
        Dietmar Eggemann <dietmar.eggemann@....com>,
        Morten Rasmussen <morten.rasmussen@....com>,
        Juri Lelli <juri.lelli@...hat.com>,
        Todd Kjos <tkjos@...roid.com>,
        Joel Fernandes <joelaf@...gle.com>
Subject: Re: [PATCH v2 2/4] sched/fair: add util_est on top of PELT

On 13-Dec 17:19, Peter Zijlstra wrote:
> On Tue, Dec 05, 2017 at 05:10:16PM +0000, Patrick Bellasi wrote:
> > @@ -562,6 +577,12 @@ struct task_struct {
> >  
> >  	const struct sched_class	*sched_class;
> >  	struct sched_entity		se;
> > +	/*
> > +	 * Since we use se.avg.util_avg to update util_est fields,
> > +	 * the latter benefits from being close to se, which
> > +	 * also defines se.avg as cache aligned.
> > +	 */
> > +	struct util_est			util_est;
> >  	struct sched_rt_entity		rt;
> >  #ifdef CONFIG_CGROUP_SCHED
> >  	struct task_group		*sched_task_group;
> 
> 
> > diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
> > index b19552a212de..8371839075fa 100644
> > --- a/kernel/sched/sched.h
> > +++ b/kernel/sched/sched.h
> > @@ -444,6 +444,7 @@ struct cfs_rq {
> >  	 * CFS load tracking
> >  	 */
> >  	struct sched_avg avg;
> > +	unsigned long util_est_runnable;
> >  #ifndef CONFIG_64BIT
> >  	u64 load_last_update_time_copy;
> >  #endif
> 
> 
> So you put the util_est in task_struct (not sched_entity) but the
> util_est_runnable in cfs_rq (not rq). Seems inconsistent.

One goal was to keep the util_est variables close to the util_avg
used to load the filter, for cache affinity's sake.
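
To make the cache-affinity point concrete, here is a rough,
standalone sketch of the intended layout (the util_est field names
below are placeholders for illustration, not the exact fields from
the series):

/*
 * Illustrative sketch only: util_est sits right after se so that it
 * shares cache locality with se.avg, the signal used to update it.
 */
struct sched_avg {
	unsigned long util_avg;		/* PELT utilization signal */
	/* ... other PELT signals ... */
};

struct sched_entity {
	/* ... */
	struct sched_avg avg;		/* cache aligned in the real struct */
};

struct util_est {
	unsigned long last;		/* placeholder field */
	unsigned long ewma;		/* placeholder field */
};

struct task_struct_sketch {
	struct sched_entity se;
	struct util_est util_est;	/* adjacent to se.avg on purpose */
	/* ... */
};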

The other goal was to have util_est data only for tasks and the
CPU's RQ, thus avoiding unused data in TGs' RQs and SEs.

Unfortunately, the first goal does not allow the second to be fully
achieved and, you're right, the solution looks a bit inconsistent.

Do you think it would be better to disregard cache proximity and
move util_est_runnable to rq?
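
For reference, the move you suggest would look roughly like this
(sketch only; it trades proximity to cfs_rq's avg for consistency
with util_est living in task_struct):

/*
 * Illustrative sketch: track the estimated utilization of runnable
 * tasks at the rq level instead of in cfs_rq.
 */
struct rq_sketch {
	/* ... */
	unsigned long util_est_runnable;	/* moved here from cfs_rq */
	/* ... */
};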

-- 
#include <best/regards.h>

Patrick Bellasi
