Message-ID: <155125501891.293431.3345233332801109696.stgit@buzz>
Date: Wed, 27 Feb 2019 11:10:18 +0300
From: Konstantin Khlebnikov <khlebnikov@...dex-team.ru>
To: Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...hat.com>, linux-kernel@...r.kernel.org
Subject: [PATCH 2/3] sched/core: handle overflow in cpu_shares_write_u64
The bit shift in scale_load() can overflow the shares value. Saturate
it to MAX_SHARES first, matching the clamping done in the subsequent
sched_group_set_shares() call.
Example:
# echo 9223372036854776832 > cpu.shares
# cat cpu.shares
Before patch: 1024
After patch: 262144
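To make the wraparound concrete, here is a minimal userspace sketch
(not kernel code) that reproduces the arithmetic; it assumes
SCHED_FIXEDPOINT_SHIFT == 10 and MAX_SHARES == 1UL << 18, the values
used on 64-bit kernels with CONFIG_FAIR_GROUP_SCHED:

	#include <stdio.h>
	#include <limits.h>

	#define SCHED_FIXEDPOINT_SHIFT	10
	#define MAX_SHARES		(1UL << 18)

	/* fixed-point helpers analogous to kernel/sched/sched.h on 64-bit */
	#define scale_load(w)		((w) << SCHED_FIXEDPOINT_SHIFT)
	#define scale_load_down(w)	((w) >> SCHED_FIXEDPOINT_SHIFT)

	int main(void)
	{
		unsigned long shareval = 9223372036854776832UL; /* 2^63 + 2^10 */

		/*
		 * Unpatched: the unsigned shift wraps modulo 2^64, so
		 * 2^63 + 2^10 scales to 2^20, which reads back as 1024.
		 */
		printf("before: %lu\n", scale_load_down(scale_load(shareval)));

		/* Patched: saturate before scaling, as in the diff below. */
		if (shareval > scale_load_down(ULONG_MAX))
			shareval = MAX_SHARES;
		printf("after:  %lu\n", scale_load_down(scale_load(shareval)));

		return 0;
	}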
Signed-off-by: Konstantin Khlebnikov <khlebnikov@...dex-team.ru>
---
kernel/sched/core.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index d8d76a65cfdd..a626fbd9fdc7 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -6502,6 +6502,8 @@ static void cpu_cgroup_attach(struct cgroup_taskset *tset)
 static int cpu_shares_write_u64(struct cgroup_subsys_state *css,
 				struct cftype *cftype, u64 shareval)
 {
+	if (shareval > scale_load_down(ULONG_MAX))
+		shareval = MAX_SHARES;
 	return sched_group_set_shares(css_tg(css), scale_load(shareval));
 }
 