lists.openwall.net
Date: Mon, 11 Apr 2016 22:29:17 -0700
From: "Bill Huey (hui)" <bill.huey@...il.com>
To: Peter Zijlstra <a.p.zijlstra@...llo.nl>, Steven Rostedt <rostedt@...dmis.org>,
	linux-kernel@...r.kernel.org
Cc: Dario Faggioli <raistlin@...ux.it>, Alessandro Zummo <a.zummo@...ertech.it>,
	Thomas Gleixner <tglx@...utronix.de>, KY Srinivasan <kys@...rosoft.com>,
	Amir Frenkel <frenkel.amir@...il.com>, Bdale Garbee <bdale@....com>
Subject: [PATCH RFC v0 09/12] Add priority support for the cyclic scheduler

Initial bits to prevent priority changes of cyclic scheduler tasks by
only allowing them to be SCHED_FIFO. Fairly hacky at this time and will
need revisiting because of the security concerns. Also affects task death
handling, since the cyclic scheduler uses an additional scheduler class
hook for cleanup at death. Tasks must be SCHED_FIFO.

Signed-off-by: Bill Huey (hui) <bill.huey@...il.com>
---
 kernel/sched/core.c | 13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 44db0ff..cf6cf57 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -87,6 +87,10 @@
 #include "../workqueue_internal.h"
 #include "../smpboot.h"
 
+#ifdef CONFIG_RTC_CYCLIC
+#include "cyclic.h"
+#endif
+
 #define CREATE_TRACE_POINTS
 #include <trace/events/sched.h>
 
@@ -2074,6 +2078,10 @@ static void __sched_fork(unsigned long clone_flags, struct task_struct *p)
 	memset(&p->se.statistics, 0, sizeof(p->se.statistics));
 #endif
 
+#ifdef CONFIG_RTC_CYCLIC
+	RB_CLEAR_NODE(&p->rt.rt_overrun.node);
+#endif
+
 	RB_CLEAR_NODE(&p->dl.rb_node);
 	init_dl_task_timer(&p->dl);
 	__dl_clear_params(p);
@@ -3881,6 +3889,11 @@ recheck:
 		if (dl_policy(policy))
 			return -EPERM;
 
+#ifdef CONFIG_RTC_CYCLIC
+		if (rt_overrun_policy(p, policy))
+			return -EPERM;
+#endif
+
 		/*
 		 * Treat SCHED_IDLE as nice 20. Only allow a switch to
 		 * SCHED_NORMAL if the RLIMIT_NICE would normally permit it.
-- 
2.5.0