Date:	Thu, 20 Aug 2015 04:11:06 +0800
From:	Yuyang Du <yuyang.du@...el.com>
To:	Peter Zijlstra <peterz@...radead.org>
Cc:	byungchul.park@....com, mingo@...nel.org,
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH v3 4/5] sched: sync a se with its cfs_rq when switching
 sched class to fair class

On Wed, Aug 19, 2015 at 07:12:41PM +0200, Peter Zijlstra wrote:
> On Wed, Aug 19, 2015 at 03:47:15PM +0900, byungchul.park@....com wrote:
> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > index 1be042a..3419f6c 100644
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -2711,6 +2711,17 @@ static inline void update_load_avg(struct sched_entity *se, int update_tg)
> >  
> >  static void attach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se)
> >  {
> > +	/*
> > +	 * in case of migration and cgroup-change, more care should be taken
> > +	 * because se's cfs_rq was changed, that means calling __update_load_avg
> > +	 * with new cfs_rq->avg.last_update_time is meaningless. so we skip the
> > +	 * update here. we have to update it with prev cfs_rq just before changing
> > +	 * se's cfs_rq, and get here soon.
> > +	 */
> > +	if (se->avg.last_update_time)
> > +		__update_load_avg(cfs_rq->avg.last_update_time, cpu_of(rq_of(cfs_rq)),
> > +				&se->avg, 0, 0, NULL);
> > +
> >  	se->avg.last_update_time = cfs_rq->avg.last_update_time;
> >  	cfs_rq->avg.load_avg += se->avg.load_avg;
> >  	cfs_rq->avg.load_sum += se->avg.load_sum;
> 
> you seem to have forgotten to remove the same logic from
> enqueue_entity_load_avg(), which will now call __update_load_avg()
> twice.

In the case of enqueue_entity_load_avg(), that seems to be OK.

However, the problem is that the patch makes the two paths "entangled":

In enqueue_entity_load_avg():

	if (migrated)
		attach_entity_load_avg();

while in attach_entity_load_avg():

	if (!migrated)
		__update_load_avg();

so, when attach() is called from enqueue(), the if() inside it can never be true.

To Byungchul,

1) I suggest you not entangle the entire series by mixing problem
   solving with code manipulation. That is, first solve the "move
   between task groups" problem and the switch_to/from problem (if it
   is a problem; either way, add a comment explaining how you deal
   with the lost record and why).
2) After that, make the code cleaner without changing the logic, and in
   particular avoid entangling the logic in order to do the code
   manipulation.
3) If you don't hate uppercase letters, use them properly.

If it helps.

Thanks,
Yuyang
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
