Message-ID: <20150813074600.GB16853@twins.programming.kicks-ass.net>
Date:	Thu, 13 Aug 2015 09:46:00 +0200
From:	Peter Zijlstra <peterz@...radead.org>
To:	byungchul.park@....com
Cc:	mingo@...nel.org, linux-kernel@...r.kernel.org, yuyang.du@...el.com
Subject: Re: [PATCH] sched: sync with the cfs_rq when changing sched class

On Thu, Aug 13, 2015 at 02:55:55PM +0900, byungchul.park@....com wrote:
> @@ -8023,16 +8036,7 @@ static void task_move_group_fair(struct task_struct *p, int queued)
>  
>  #ifdef CONFIG_SMP
>  	/* synchronize task with its prev cfs_rq */
> -	if (!queued)
> -		__update_load_avg(cfs_rq->avg.last_update_time, cpu_of(rq_of(cfs_rq)),
> -				&se->avg, se->on_rq * scale_load_down(se->load.weight),
> -				cfs_rq->curr == se, NULL);
> -
> -	/* remove our load when we leave */
> -	cfs_rq->avg.load_avg = max_t(long, cfs_rq->avg.load_avg - se->avg.load_avg, 0);
> -	cfs_rq->avg.load_sum = max_t(s64, cfs_rq->avg.load_sum - se->avg.load_sum, 0);
> -	cfs_rq->avg.util_avg = max_t(long, cfs_rq->avg.util_avg - se->avg.util_avg, 0);
> -	cfs_rq->avg.util_sum = max_t(s32, cfs_rq->avg.util_sum - se->avg.util_sum, 0);
> +	detach_entity_load_avg(cfs_rq, se);
>  #endif
>  	set_task_rq(p, task_cpu(p));
>  	se->depth = se->parent ? se->parent->depth + 1 : 0;
> @@ -8042,11 +8046,7 @@ static void task_move_group_fair(struct task_struct *p, int queued)
>  
>  #ifdef CONFIG_SMP
>  	/* Virtually synchronize task with its new cfs_rq */
> -	p->se.avg.last_update_time = cfs_rq->avg.last_update_time;
> -	cfs_rq->avg.load_avg += p->se.avg.load_avg;
> -	cfs_rq->avg.load_sum += p->se.avg.load_sum;
> -	cfs_rq->avg.util_avg += p->se.avg.util_avg;
> -	cfs_rq->avg.util_sum += p->se.avg.util_sum;
> +	attach_entity_load_avg(cfs_rq, se);
>  #endif
>  }
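(For readers following along: the factored-out helpers can be
reconstructed from the removed lines above. This is only a sketch; in
particular, how the old !queued special case from the first hunk is
folded into detach_entity_load_avg() is an assumption:)

static void detach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se)
{
	/* Sync the entity's averages up to the cfs_rq clock before removal. */
	__update_load_avg(cfs_rq->avg.last_update_time, cpu_of(rq_of(cfs_rq)),
			  &se->avg, se->on_rq * scale_load_down(se->load.weight),
			  cfs_rq->curr == se, NULL);

	/* Remove the entity's contribution, clamping at zero. */
	cfs_rq->avg.load_avg = max_t(long, cfs_rq->avg.load_avg - se->avg.load_avg, 0);
	cfs_rq->avg.load_sum = max_t(s64, cfs_rq->avg.load_sum - se->avg.load_sum, 0);
	cfs_rq->avg.util_avg = max_t(long, cfs_rq->avg.util_avg - se->avg.util_avg, 0);
	cfs_rq->avg.util_sum = max_t(s32, cfs_rq->avg.util_sum - se->avg.util_sum, 0);
}

static void attach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se)
{
	/* Adopt the new cfs_rq's clock and add our contribution back. */
	se->avg.last_update_time = cfs_rq->avg.last_update_time;
	cfs_rq->avg.load_avg += se->avg.load_avg;
	cfs_rq->avg.load_sum += se->avg.load_sum;
	cfs_rq->avg.util_avg += se->avg.util_avg;
	cfs_rq->avg.util_sum += se->avg.util_sum;
}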

Can't we go one further and do:

static void task_move_group_fair(struct task_struct *p)
{
	struct rq *rq = task_rq(p);

	switched_from_fair(rq, p);
	set_task_rq(p, task_cpu(p));
	switched_to_fair(rq, p);
}

switched_from_fair() already does the vruntime and load_avg detach;
switched_to_fair() should do the reverse, although it currently doesn't
appear to put the load_avg back.
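I.e., switched_to_fair() would presumably want the counterpart of the
detach, something like (a sketch; the exact placement within
switched_to_fair() is an assumption):

#ifdef CONFIG_SMP
	/* Mirror the detach in switched_from_fair(): put the load back. */
	attach_entity_load_avg(cfs_rq_of(&p->se), &p->se);
#endif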
