Message-ID: <20140715015105.GA2532@intel.com>
Date:	Tue, 15 Jul 2014 09:51:05 +0800
From:	Yuyang Du <yuyang.du@...el.com>
To:	bsegall@...gle.com
Cc:	mingo@...hat.com, peterz@...radead.org,
	linux-kernel@...r.kernel.org, pjt@...gle.com,
	arjan.van.de.ven@...el.com, len.brown@...el.com,
	rafael.j.wysocki@...el.com, alan.cox@...el.com,
	mark.gross@...el.com, fengguang.wu@...el.com
Subject: Re: [PATCH 2/2 v2] sched: Rewrite per entity runnable load average
 tracking

Thanks, Ben.

On Mon, Jul 14, 2014 at 12:33:53PM -0700, bsegall@...gle.com wrote:

> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -282,9 +282,6 @@ static inline struct cfs_rq *group_cfs_rq(struct sched_entity *grp)
> >  	return grp->my_q;
> >  }
> >  
> > -static void update_cfs_rq_blocked_load(struct cfs_rq *cfs_rq,
> > -				       int force_update);
> > -
> >  static inline void list_add_leaf_cfs_rq(struct cfs_rq *cfs_rq)
> >  {
> >  	if (!cfs_rq->on_list) {
> > @@ -304,8 +301,6 @@ static inline void list_add_leaf_cfs_rq(struct cfs_rq *cfs_rq)
> >  		}
> >  
> >  		cfs_rq->on_list = 1;
> > -		/* We should have no load, but we need to update last_decay. */
> > -		update_cfs_rq_blocked_load(cfs_rq, 0);
> 
> AFAICT this call was nonsense before your change, yes (it gets called by
> enqueue_entity_load_avg)?
> 
Yes, I think so.

> > @@ -667,18 +662,17 @@ static u64 sched_vslice(struct cfs_rq *cfs_rq, struct sched_entity *se)
> >  #ifdef CONFIG_SMP
> >  static unsigned long task_h_load(struct task_struct *p);
> >  
> > -static inline void __update_task_entity_contrib(struct sched_entity *se);
> > -
> >  /* Give a new task initial runnable values to make its load heavy in its infancy */
> >  void init_task_runnable_average(struct task_struct *p)
> >  {
> >  	u32 slice;
> > +	struct sched_avg *sa = &p->se.avg;
> >  
> > -	p->se.avg.decay_count = 0;
> > +	sa->last_update_time = 0;
> > +	sa->period_contrib = 0;
> 
> sa->period_contrib = slice;

period_contrib should be strictly < 1024, and I suppose sched_slice() does not
guarantee that. So here I will give it 1023 to make the initial load as heavy
as possible.
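
For concreteness, here is roughly what I have in mind (a sketch only; the
load_avg/load_sum seeding below is my assumption of where this ends up, not
final code):

void init_task_runnable_average(struct task_struct *p)
{
	struct sched_avg *sa = &p->se.avg;

	sa->last_update_time = 0;
	/*
	 * period_contrib is the part of the current 1024us period
	 * already accumulated, so it must stay strictly below 1024.
	 * Starting at 1023 completes a period on the first update,
	 * so the new task weighs in as heavily as possible.
	 */
	sa->period_contrib = 1023;
	/* Sketch: seed as if the task had been runnable forever. */
	sa->load_avg = scale_load_down(p->se.load.weight);
	sa->load_sum = sa->load_avg * LOAD_AVG_MAX;
}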
 
> > +static __always_inline u64 decay_load64(u64 val, u64 n)
> > +{
> > +	if (likely(val <= UINT_MAX))
> > +		val = decay_load(val, n);
> > +	else {
> > +		/*
> > +		 * LOAD_AVG_MAX can last ~500ms (=log_2(LOAD_AVG_MAX)*32ms).
> > +	 * Since the runnable load_avg here is so big, it is impossible
> > +	 * that it has not been updated for such a long time. So
> > +	 * LOAD_AVG_MAX is enough here.
> > +		 */
> 
> I mean, LOAD_AVG_MAX is irrelevant - the constant could just as well be
> 1<<20, or whatever, yes? In fact, if you're going to then turn it into a
> fraction of 1<<10, just do (with whatever temporaries you find most tasteful):
> 
> val *= (u32) decay_load(1 << 10, n);
> val >>= 10;
> 

LOAD_AVG_MAX is selected on purpose. A val arriving here is known to be really
big, so decay_load() may fail to decay it to 0 even when the period n is not
small. If we used 1<<10 here, n = 10*32 would already decay it to 0, while val
(larger than 1<<32) could still survive.

But if even LOAD_AVG_MAX decays to 0, it means that in the current code no
runnable_avg_sum would survive either, since LOAD_AVG_MAX is the upper bound.
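
To make the difference concrete, here is a quick userspace approximation (my
own toy code: decay_load() is approximated by pure halving per 32 periods,
ignoring the kernel's n%32 lookup table):

#include <stdio.h>
#include <stdint.h>

#define LOAD_AVG_MAX 47742

/* Rough stand-in for decay_load(): y^32 = 1/2, so halve once per
 * 32 periods. */
static uint64_t decay_approx(uint64_t val, unsigned int n)
{
	return (n / 32 >= 64) ? 0 : val >> (n / 32);
}

int main(void)
{
	uint64_t big = 1ULL << 40;	/* a really big load sum */
	unsigned int n = 352;		/* 352 periods ~= 352ms */

	/* Base 1<<10: the factor already hits 0 here, zeroing val. */
	uint64_t f10 = decay_approx(1 << 10, n);
	printf("base 1<<10:        %llu\n",
	       (unsigned long long)((big * f10) >> 10));

	/* Base LOAD_AVG_MAX: the factor survives until ~500ms. */
	uint64_t fmax = decay_approx(LOAD_AVG_MAX, n);
	printf("base LOAD_AVG_MAX: %llu\n",
	       (unsigned long long)(big * fmax / LOAD_AVG_MAX));
	return 0;
}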

> > +/*
> > + * Strictly, this se should use its parent cfs_rq's clock_task, but
> > + * here we use its own cfs_rq's for performance reasons. But on load_avg
> > + * update, what we really care about is "the difference between two regular
> > + * clock reads", not absolute time, so the variation should be negligible.
> > + */
> 
> Yes, but the difference between two clock reads can differ vastly
> depending on which clock you read - if cfs_rq was throttled, but
> parent_cfs_rq was not, reading cfs_rq's clock will give you no time
> passing. That said I think that's probably what you want for cfs_rq's
> load_avg, but is wrong for the group se, which probably needs to use its
> parent's.

Yes, then I think I may have to fall back to tracking the group se's load_avg
separately.
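
To spell out which clock is which (cfs_rq_of() and group_cfs_rq() are the
existing accessors; the helper names below are mine):

static inline u64 se_load_clock(struct sched_entity *se)
{
	/* The cfs_rq this se is queued on, i.e. the parent level;
	 * it keeps ticking even while the child queue is throttled. */
	return cfs_rq_clock_task(cfs_rq_of(se));
}

static inline u64 my_q_load_clock(struct sched_entity *se)
{
	/* The cfs_rq a group se owns (se->my_q); while it is
	 * throttled, reading it shows no time passing. */
	return cfs_rq_clock_task(group_cfs_rq(se));
}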

> > +/* Update task and its cfs_rq load average */
> > +static inline void update_load_avg(struct sched_entity *se, int update_tg)
> >  {
> > +	struct cfs_rq *cfs_rq = cfs_rq_of(se);
> > +	u64 now = cfs_rq_clock_task(cfs_rq);
> > +
> >  	/*
> > +	 * Track task load average for carrying it to a new CPU after migration
> >  	 */
> > +	if (entity_is_task(se))
> > +		__update_load_avg(now, &se->avg, se->on_rq * se->load.weight);
> >  
> > +	update_cfs_rq_load_avg(now, cfs_rq);
> >  
> > +	if (update_tg)
> > +		update_tg_load_avg(cfs_rq);
> >  }
> 
> I personally find this very confusing, in that update_load_avg is doing
> more to se->cfs_rq, and in fact on anything other than a task, it isn't
> touching the se at all (instead, it touches _se->parent_ even).
 
What is confusing? The naming?

About the overflow problem, maybe I can just fall back to doing
load_avg / 47742 on every update; then everything would naturally be in the
same range as the current code.
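
That is, something like (just a sketch; the helper name is made up, and
div_u64() is only there to be safe on 32-bit):

static inline u64 sum_to_avg(u64 load_sum)
{
	/*
	 * load_sum is bounded by weight * LOAD_AVG_MAX (47742), so
	 * dividing by LOAD_AVG_MAX on every update keeps load_avg in
	 * the same range as load.weight in the current code.
	 */
	return div_u64(load_sum, 47742);	/* LOAD_AVG_MAX */
}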