Message-ID: <b30e5815-441c-b4d3-85ad-65a4020f6d93@arm.com>
Date:   Thu, 29 Apr 2021 14:34:14 +0200
From:   Dietmar Eggemann <dietmar.eggemann@....com>
To:     Quentin Perret <qperret@...gle.com>, mingo@...hat.com,
        peterz@...radead.org, vincent.guittot@...aro.org,
        juri.lelli@...hat.com
Cc:     rostedt@...dmis.org, bsegall@...gle.com, mgorman@...e.de,
        bristot@...hat.com, qais.yousef@....com, kernel-team@...roid.com,
        linux-kernel@...r.kernel.org, patrick.bellasi@...bug.net
Subject: Re: [PATCH] sched: Fix out-of-bound access in uclamp

On 28/04/2021 19:27, Quentin Perret wrote:
> Util-clamp places tasks in different buckets based on their clamp values
> for performance reasons. However, the size of buckets is currently
> computed using a rounding division, which can lead to an off-by-one
> error in some configurations.
> 
> For instance, with 20 buckets, the bucket size will be 1024/20=51.2,
> rounded to the closest value: 51. Now, a task with a clamp of 1024 (as
> is the default for the min clamp of RT tasks) will be mapped to bucket
> id 1024/51=20, since the mapping uses a standard integer division. Sadly,
> correct indexes are in the range [0,19], hence leading to an out-of-bounds
> memory access.
> 
> Fix this by using a rounding-up division when computing the bucket size.

But if you use e.g. 16 buckets, wouldn't you still end up with this
task mapped into bucket_id=16?

1024/16 = 64   (bucket size; DIV_ROUND_UP and DIV_ROUND_CLOSEST agree here)

1024/64 = 16   (bucket id, again one past the valid [0,15] range)
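
A quick user-space sketch of the raw clamp-value -> bucket-id mapping
(not the kernel code itself, and ignoring any clamping that may be
applied elsewhere) which shows both the 20- and 16-bucket cases:

  #include <stdio.h>

  #define SCHED_CAPACITY_SCALE	1024
  #define DIV_ROUND_UP(n, d)	(((n) + (d) - 1) / (d))

  int main(void)
  {
  	unsigned int nr_buckets[] = { 20, 16 };

  	for (unsigned int i = 0; i < 2; i++) {
  		unsigned int nr = nr_buckets[i];
  		/* bucket size as computed with the patch applied */
  		unsigned int delta = DIV_ROUND_UP(SCHED_CAPACITY_SCALE, nr);
  		/* bucket id for a clamp value of SCHED_CAPACITY_SCALE */
  		unsigned int id = SCHED_CAPACITY_SCALE / delta;

  		printf("buckets=%u delta=%u id(1024)=%u%s\n", nr, delta, id,
  		       id >= nr ? "  <- out of the valid [0,nr-1] range" : "");
  	}

  	return 0;
  }

With 20 buckets this now gives delta=52 and id=19 (fine), but with 16
buckets delta stays at 64 and id=16, i.e. still one past the last valid
index.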

> 
> Fixes: 69842cba9ace ("sched/uclamp: Add CPU's clamp buckets refcounting")
> Suggested-by: Qais Yousef <qais.yousef@....com>
> Signed-off-by: Quentin Perret <qperret@...gle.com>
> 
> ---
> 
> This was found thanks to the SCHED_WARN_ON() in uclamp_rq_dec_id() which
> indicated a broken state while running with 20 buckets on Android.
> 
> Big thanks to Qais for the help with this one.
> ---
>  kernel/sched/core.c | 3 +--
>  1 file changed, 1 insertion(+), 2 deletions(-)
> 
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index 98191218d891..ec175909e8b0 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -920,8 +920,7 @@ static struct uclamp_se uclamp_default[UCLAMP_CNT];
>   */
>  DEFINE_STATIC_KEY_FALSE(sched_uclamp_used);
>  
> -/* Integer rounded range for each bucket */
> -#define UCLAMP_BUCKET_DELTA DIV_ROUND_CLOSEST(SCHED_CAPACITY_SCALE, UCLAMP_BUCKETS)
> +#define UCLAMP_BUCKET_DELTA DIV_ROUND_UP(SCHED_CAPACITY_SCALE, UCLAMP_BUCKETS)
>  
>  #define for_each_clamp_id(clamp_id) \
>  	for ((clamp_id) = 0; (clamp_id) < UCLAMP_CNT; (clamp_id)++)
> 
