Date: Mon, 20 Apr 2020 10:44:20 +0800
From: Huaixin Chang <changhuaixin@...ux.alibaba.com>
To: linux-kernel@...r.kernel.org
Cc: peterz@...radead.org, mingo@...hat.com, bsegall@...gle.com,
	chiluk+linux@...eed.com, vincent.guittot@...aro.org, pauld@...head.com,
	Huaixin Chang <changhuaixin@...ux.alibaba.com>
Subject: [PATCH 1/2] sched: Defend cfs and rt bandwidth quota against overflow

The kernel's limit on cpu.cfs_quota_us is insufficient: some large values
can overflow the to_ratio() calculation and produce unexpected results.
For example, create two nested cpu cgroups, write a reasonable value into
the child's cpu.cfs_quota_us and a large value into the parent's, and the
second write fails:

	cd /sys/fs/cgroup/cpu
	mkdir parent; mkdir parent/child
	echo 8000 > parent/child/cpu.cfs_quota_us
	# 17592186044416 is (1UL << 44)
	echo 17592186044416 > parent/cpu.cfs_quota_us

In this case the quota overflows during the bandwidth shift and thus fails
the __cfs_schedulable() check. A similar overflow also affects rt
bandwidth.

Signed-off-by: Huaixin Chang <changhuaixin@...ux.alibaba.com>
---
 kernel/sched/core.c  | 8 ++++++++
 kernel/sched/rt.c    | 9 +++++++++
 kernel/sched/sched.h | 2 ++
 3 files changed, 19 insertions(+)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 3a61a3b8eaa9..f0a74e35c3f0 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -7390,6 +7390,8 @@ static DEFINE_MUTEX(cfs_constraints_mutex);
 
 const u64 max_cfs_quota_period = 1 * NSEC_PER_SEC; /* 1s */
 static const u64 min_cfs_quota_period = 1 * NSEC_PER_MSEC; /* 1ms */
+/* More than 203 days if BW_SHIFT equals 20. */
+static const u64 max_cfs_runtime = MAX_BW_USEC * NSEC_PER_USEC;
 
 static int __cfs_schedulable(struct task_group *tg, u64 period, u64 runtime);
 
@@ -7417,6 +7419,12 @@ static int tg_set_cfs_bandwidth(struct task_group *tg, u64 period, u64 quota)
 	if (period > max_cfs_quota_period)
 		return -EINVAL;
 
+	/*
+	 * Bound quota to defend quota against overflow during bandwidth shift.
+	 */
+	if (quota != RUNTIME_INF && quota > max_cfs_runtime)
+		return -EINVAL;
+
 	/*
 	 * Prevent race between setting of cfs_rq->runtime_enabled and
 	 * unthrottle_offline_cfs_rqs().
diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index df11d88c9895..f5eea19d68c4 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -2569,6 +2569,9 @@ static int __rt_schedulable(struct task_group *tg, u64 period, u64 runtime)
 	return ret;
 }
 
+/* More than 203 days if BW_SHIFT equals 20. */
+static const u64 max_rt_runtime = MAX_BW_USEC * NSEC_PER_USEC;
+
 static int tg_set_rt_bandwidth(struct task_group *tg,
 		u64 rt_period, u64 rt_runtime)
 {
@@ -2585,6 +2588,12 @@ static int tg_set_rt_bandwidth(struct task_group *tg,
 	if (rt_period == 0)
 		return -EINVAL;
 
+	/*
+	 * Bound quota to defend quota against overflow during bandwidth shift.
+	 */
+	if (rt_runtime != RUNTIME_INF && rt_runtime > max_rt_runtime)
+		return -EINVAL;
+
 	mutex_lock(&rt_constraints_mutex);
 	err = __rt_schedulable(tg, rt_period, rt_runtime);
 	if (err)
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index db3a57675ccf..6f6b7f545557 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1918,6 +1918,8 @@ extern void init_dl_inactive_task_timer(struct sched_dl_entity *dl_se);
 #define BW_SHIFT		20
 #define BW_UNIT			(1 << BW_SHIFT)
 #define RATIO_SHIFT		8
+#define MAX_BW_BITS		(64 - BW_SHIFT)
+#define MAX_BW_USEC		((1UL << MAX_BW_BITS) - 1)
 unsigned long to_ratio(u64 period, u64 runtime);
 
 extern void init_entity_runnable_average(struct sched_entity *se);
-- 
2.14.4.44.g2045bb6
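
[Editor's illustration, not part of the patch] The arithmetic behind the reproducer
can be seen with a minimal userspace sketch. It assumes to_ratio() effectively
computes div64_u64(runtime << BW_SHIFT, period), uses the default 100 ms CFS
period, and reuses the patch's MAX_BW_USEC definition; the file name and helper
below are hypothetical. It compares the wrapping 64-bit shift against 128-bit
arithmetic and prints the ~203-day bound:

	/* overflow_sketch.c - illustrative only; build with gcc or clang on a
	 * 64-bit target (uses the __int128 extension). */
	#include <stdio.h>
	#include <stdint.h>

	#define NSEC_PER_USEC	1000ULL
	#define BW_SHIFT	20
	#define MAX_BW_USEC	((1ULL << (64 - BW_SHIFT)) - 1)	/* as defined in the patch */

	/* Mimics the shift in to_ratio(): runtime << BW_SHIFT wraps in 64 bits. */
	static uint64_t to_ratio_64(uint64_t period, uint64_t runtime)
	{
		return (runtime << BW_SHIFT) / period;
	}

	int main(void)
	{
		uint64_t quota_us  = 1ULL << 44;	/* value from the reproducer */
		uint64_t period_us = 100000;		/* default 100 ms CFS period */
		uint64_t quota_ns  = quota_us * NSEC_PER_USEC;
		uint64_t period_ns = period_us * NSEC_PER_USEC;

		/* Exact result, using 128-bit arithmetic as a reference. */
		unsigned __int128 exact =
			((unsigned __int128)quota_ns << BW_SHIFT) / period_ns;

		printf("64-bit  ratio: %llu (wrapped)\n",
		       (unsigned long long)to_ratio_64(period_ns, quota_ns));
		printf("128-bit ratio: %llu\n", (unsigned long long)(uint64_t)exact);

		/* Largest runtime whose shift still fits in 64 bits: ~203 days. */
		printf("MAX_BW_USEC: %llu us (~%llu days)\n",
		       (unsigned long long)MAX_BW_USEC,
		       (unsigned long long)(MAX_BW_USEC / 1000000 / 3600 / 24));
		return 0;
	}

With these inputs the 64-bit shift wraps exactly to zero (2^44 us * 1000 ns/us
<< 20 is 1000 * 2^64), so the parent's computed bandwidth appears smaller than
the child's, which is presumably why the write above is rejected. Capping the
runtime at MAX_BW_USEC microseconds keeps the shifted value within 64 bits.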