Message-ID: <tip-1a010e29cfa00fee2888fd2fd4983f848cbafb58@git.kernel.org>
Date: Fri, 19 Apr 2019 05:16:09 -0700
From: tip-bot for Konstantin Khlebnikov <tipbot@...or.com>
To: linux-tip-commits@...r.kernel.org
Cc: peterz@...radead.org, a.p.zijlstra@...llo.nl, hpa@...or.com,
linux-kernel@...r.kernel.org, khlebnikov@...dex-team.ru,
mingo@...nel.org, torvalds@...ux-foundation.org, tglx@...utronix.de
Subject: [tip:sched/core] sched/rt: Check integer overflow at usec to nsec conversion

Commit-ID: 1a010e29cfa00fee2888fd2fd4983f848cbafb58
Gitweb: https://git.kernel.org/tip/1a010e29cfa00fee2888fd2fd4983f848cbafb58
Author: Konstantin Khlebnikov <khlebnikov@...dex-team.ru>
AuthorDate: Wed, 27 Feb 2019 11:10:17 +0300
Committer: Ingo Molnar <mingo@...nel.org>
CommitDate: Fri, 19 Apr 2019 13:42:09 +0200

sched/rt: Check integer overflow at usec to nsec conversion

Example of unhandled overflows:

 # echo 18446744073709651 > cpu.rt_runtime_us
 # cat cpu.rt_runtime_us
 99

 # echo 18446744073709900 > cpu.rt_period_us
 # cat cpu.rt_period_us
 348

After this patch they will fail with -EINVAL.
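
Why those particular numbers come back: the write path converts microseconds
to nanoseconds with a u64 multiply, which silently wraps modulo 2^64. Below is
a minimal userspace sketch (not part of the patch; the file name and the local
NSEC_PER_USEC definition are stand-ins for illustration) that reproduces the
wrap-around and the guard the patch adds:

/* overflow_demo.c -- hypothetical stand-alone reproduction */
#include <stdio.h>
#include <stdint.h>

#define NSEC_PER_USEC 1000ULL	/* same constant the kernel uses */

int main(void)
{
	uint64_t us = 18446744073709651ULL;	/* value echoed above */
	uint64_t ns = us * NSEC_PER_USEC;	/* wraps modulo 2^64 -> 99384 */

	/* Reading the file back converts ns to us again, hence "99". */
	printf("%llu us -> %llu ns -> reads back as %llu us\n",
	       (unsigned long long)us,
	       (unsigned long long)ns,
	       (unsigned long long)(ns / NSEC_PER_USEC));

	/*
	 * The guard added by the patch: any value larger than
	 * U64_MAX / NSEC_PER_USEC cannot be converted without wrapping.
	 */
	if (us > UINT64_MAX / NSEC_PER_USEC)
		printf("rejected with -EINVAL after the patch\n");

	return 0;
}

The second value behaves the same way: 18446744073709900 * 1000 wraps to
348384 ns, which reads back as the 348 shown above.
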
Signed-off-by: Konstantin Khlebnikov <khlebnikov@...dex-team.ru>
Acked-by: Peter Zijlstra <a.p.zijlstra@...llo.nl>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Thomas Gleixner <tglx@...utronix.de>
Link: http://lkml.kernel.org/r/155125501739.293431.5252197504404771496.stgit@buzz
Signed-off-by: Ingo Molnar <mingo@...nel.org>
---
 kernel/sched/rt.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index 90fa23d36565..1e6b909dca36 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -2555,6 +2555,8 @@ int sched_group_set_rt_runtime(struct task_group *tg, long rt_runtime_us)
 	rt_runtime = (u64)rt_runtime_us * NSEC_PER_USEC;
 	if (rt_runtime_us < 0)
 		rt_runtime = RUNTIME_INF;
+	else if ((u64)rt_runtime_us > U64_MAX / NSEC_PER_USEC)
+		return -EINVAL;
 
 	return tg_set_rt_bandwidth(tg, rt_period, rt_runtime);
 }
@@ -2575,6 +2577,9 @@ int sched_group_set_rt_period(struct task_group *tg, u64 rt_period_us)
 {
 	u64 rt_runtime, rt_period;
 
+	if (rt_period_us > U64_MAX / NSEC_PER_USEC)
+		return -EINVAL;
+
 	rt_period = rt_period_us * NSEC_PER_USEC;
 	rt_runtime = tg->rt_bandwidth.rt_runtime;
 
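
A note on where each check sits: sched_group_set_rt_runtime() still performs
the multiplication first, which is harmless because the wrapped product is
discarded once -EINVAL is returned (or overwritten with RUNTIME_INF for
negative input), while sched_group_set_rt_period() rejects rt_period_us before
converting, since that parameter is already a u64 and has no "unlimited"
negative case.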