Date:	Mon, 6 Jun 2016 21:47:52 +0000 (UTC)
From:	Mathieu Desnoyers <mathieu.desnoyers@...icios.com>
To:	Julien Desfossez <jdesfossez@...icios.com>,
	Peter Zijlstra <peterz@...radead.org>,
	Ingo Molnar <mingo@...nel.org>
Cc:	Thomas Gleixner <tglx@...utronix.de>,
	rostedt <rostedt@...dmis.org>, linux-kernel@...r.kernel.org
Subject: Re: [RFC PATCH 1/2] sched: encapsulate priority changes in a
 sched_set_prio static function

----- On May 30, 2016, at 9:18 AM, Mathieu Desnoyers mathieu.desnoyers@...icios.com wrote:

> ----- On May 27, 2016, at 5:16 PM, Julien Desfossez jdesfossez@...icios.com wrote:
> 
>> Currently, the priority of tasks is modified directly in the scheduling
>> functions. Encapsulate priority updates to enable instrumentation of
>> priority changes. This will enable analysis of real-time scheduling
>> delays per thread priority, which cannot be performed accurately if we
>> only trace the priority of the currently scheduled processes.
>> 
>> The call sites that modify the priority of a task are mostly system
>> calls: sched_setscheduler, sched_setattr, sched_process_fork and
>> set_user_nice. Priority can also be dynamically boosted through
>> priority inheritance of rt_mutex by rt_mutex_setprio.
>> 
>> Signed-off-by: Julien Desfossez <jdesfossez@...icios.com>
> 
> Reviewed-by: Mathieu Desnoyers <mathieu.desnoyers@...icios.com>

CCing Ingo and Peter on the first patch of the series too,
so they can let us know if we missed anything fundamental
related to sched_deadline.

Thanks,

Mathieu

> 
>> ---
>> include/linux/sched.h |  3 ++-
>> kernel/sched/core.c   | 19 +++++++++++++------
>> 2 files changed, 15 insertions(+), 7 deletions(-)
>> 
>> diff --git a/include/linux/sched.h b/include/linux/sched.h
>> index 52c4847..48b35c0 100644
>> --- a/include/linux/sched.h
>> +++ b/include/linux/sched.h
>> @@ -1409,7 +1409,8 @@ struct task_struct {
>> #endif
>> 	int on_rq;
>> 
>> -	int prio, static_prio, normal_prio;
>> +	int prio; /* Updated through sched_set_prio() */
>> +	int static_prio, normal_prio;
>> 	unsigned int rt_priority;
>> 	const struct sched_class *sched_class;
>> 	struct sched_entity se;
>> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
>> index d1f7149..6946b8f 100644
>> --- a/kernel/sched/core.c
>> +++ b/kernel/sched/core.c
>> @@ -2230,6 +2230,11 @@ int sysctl_schedstats(struct ctl_table *table, int write,
>> #endif
>> #endif
>> 
>> +static void sched_set_prio(struct task_struct *p, int prio)
>> +{
>> +	p->prio = prio;
>> +}
>> +
>> /*
>>  * fork()/clone()-time setup:
>>  */
>> @@ -2249,7 +2254,7 @@ int sched_fork(unsigned long clone_flags, struct task_struct *p)
>> 	/*
>> 	 * Make sure we do not leak PI boosting priority to the child.
>> 	 */
>> -	p->prio = current->normal_prio;
>> +	sched_set_prio(p, current->normal_prio);
>> 
>> 	/*
>> 	 * Revert to default priority/policy on fork if requested.
>> @@ -2262,7 +2267,8 @@ int sched_fork(unsigned long clone_flags, struct task_struct *p)
>> 		} else if (PRIO_TO_NICE(p->static_prio) < 0)
>> 			p->static_prio = NICE_TO_PRIO(0);
>> 
>> -		p->prio = p->normal_prio = __normal_prio(p);
>> +		p->normal_prio = __normal_prio(p);
>> +		sched_set_prio(p, p->normal_prio);
>> 		set_load_weight(p);
>> 
>> 		/*
>> @@ -3477,7 +3483,7 @@ void rt_mutex_setprio(struct task_struct *p, int prio)
>> 		p->sched_class = &fair_sched_class;
>> 	}
>> 
>> -	p->prio = prio;
>> +	sched_set_prio(p, prio);
>> 
>> 	if (running)
>> 		p->sched_class->set_curr_task(rq);
>> @@ -3524,7 +3530,7 @@ void set_user_nice(struct task_struct *p, long nice)
>> 	p->static_prio = NICE_TO_PRIO(nice);
>> 	set_load_weight(p);
>> 	old_prio = p->prio;
>> -	p->prio = effective_prio(p);
>> +	sched_set_prio(p, effective_prio(p));
>> 	delta = p->prio - old_prio;
>> 
>> 	if (queued) {
>> @@ -3731,9 +3737,10 @@ static void __setscheduler(struct rq *rq, struct task_struct *p,
>> 	 * sched_setscheduler().
>> 	 */
>> 	if (keep_boost)
>> -		p->prio = rt_mutex_get_effective_prio(p, normal_prio(p));
>> +		sched_set_prio(p, rt_mutex_get_effective_prio(p,
>> +					normal_prio(p)));
>> 	else
>> -		p->prio = normal_prio(p);
>> +		sched_set_prio(p, normal_prio(p));
>> 
>> 	if (dl_prio(p->prio))
>> 		p->sched_class = &dl_sched_class;
>> --
>> 1.9.1
> 
> --
> Mathieu Desnoyers
> EfficiOS Inc.
> http://www.efficios.com

-- 
Mathieu Desnoyers
EfficiOS Inc.
http://www.efficios.com
