Message-ID: <20170523185222.sewatquohs3vuped@hirez.programming.kicks-ass.net>
Date: Tue, 23 May 2017 20:52:22 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Juri Lelli <juri.lelli@....com>
Cc: mingo@...hat.com, rjw@...ysocki.net, viresh.kumar@...aro.org,
linux-kernel@...r.kernel.org, linux-pm@...r.kernel.org,
tglx@...utronix.de, vincent.guittot@...aro.org,
rostedt@...dmis.org, luca.abeni@...tannapisa.it,
claudio@...dence.eu.com, tommaso.cucinotta@...tannapisa.it,
bristot@...hat.com, mathieu.poirier@...aro.org, tkjos@...roid.com,
joelaf@...gle.com, andresoportus@...gle.com,
morten.rasmussen@....com, dietmar.eggemann@....com,
patrick.bellasi@....com, Ingo Molnar <mingo@...nel.org>,
"Rafael J . Wysocki" <rafael.j.wysocki@...el.com>
Subject: Re: [PATCH RFC 3/8] sched/cpufreq_schedutil: make worker kthread be
SCHED_DEADLINE

On Tue, May 23, 2017 at 09:53:46AM +0100, Juri Lelli wrote:
> diff --git a/include/uapi/linux/sched.h b/include/uapi/linux/sched.h
> index e2a6c7b3510b..72723859ef74 100644
> --- a/include/uapi/linux/sched.h
> +++ b/include/uapi/linux/sched.h
> @@ -48,5 +48,6 @@
> */
> #define SCHED_FLAG_RESET_ON_FORK 0x01
> #define SCHED_FLAG_RECLAIM 0x02
> +#define SCHED_FLAG_SPECIAL 0x04
>
> #endif /* _UAPI_LINUX_SCHED_H */
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index 7fc2011c3ce7..ba57e2ef9aef 100644
> @@ -4205,7 +4212,9 @@ static int __sched_setscheduler(struct task_struct *p,
> }
>
> if (attr->sched_flags &
> - ~(SCHED_FLAG_RESET_ON_FORK | SCHED_FLAG_RECLAIM))
> + ~(SCHED_FLAG_RESET_ON_FORK |
> + SCHED_FLAG_RECLAIM |
> + SCHED_FLAG_SPECIAL))
> return -EINVAL;
>
> /*

Could we pretty please not expose this gruesome hack to userspace?

So if you stick it in attr->sched_flags, use a high bit and don't put it
in a uapi header. Also make the flags check explicitly fail on it when
@user, such that only _nocheck() (and thus kernel) callers have access
to it.
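
Roughly something like the below is what I mean (completely untested
sketch; the flag name and bit value here are made up, pick whatever):

/* kernel/sched/sched.h -- kernel-internal, not uapi */
#define SCHED_FLAG_SUGOV	0x10000000

/* kernel/sched/core.c */
static int __sched_setscheduler(struct task_struct *p,
				const struct sched_attr *attr,
				bool user, bool pi)
{
	/* ... */

	if (attr->sched_flags &
	    ~(SCHED_FLAG_RESET_ON_FORK |
	      SCHED_FLAG_RECLAIM |
	      SCHED_FLAG_SUGOV))
		return -EINVAL;

	/*
	 * SCHED_FLAG_SUGOV is kernel-internal; only
	 * sched_setattr_nocheck() (and thus in-kernel) callers may set
	 * it, sys_sched_setattr() must reject it.
	 */
	if (user && (attr->sched_flags & SCHED_FLAG_SUGOV))
		return -EINVAL;

	/* ... */
}

That way the bit never has to show up in include/uapi/linux/sched.h at
all.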

Also, there's not nearly enough warnings and other derisory comments
near it.