Message-ID: <CAKfTPtAfgVnMYkY_M+SDmNQDb_EsxSctQw-fkt2WJhczZakOjQ@mail.gmail.com>
Date:   Fri, 30 Apr 2021 09:45:32 +0200
From:   Vincent Guittot <vincent.guittot@...aro.org>
To:     Quentin Perret <qperret@...gle.com>
Cc:     Ingo Molnar <mingo@...hat.com>,
        Peter Zijlstra <peterz@...radead.org>,
        Juri Lelli <juri.lelli@...hat.com>,
        Dietmar Eggemann <dietmar.eggemann@....com>,
        Steven Rostedt <rostedt@...dmis.org>,
        Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
        Daniel Bristot de Oliveira <bristot@...hat.com>,
        Qais Yousef <qais.yousef@....com>,
        Android Kernel Team <kernel-team@...roid.com>,
        linux-kernel <linux-kernel@...r.kernel.org>,
        Patrick Bellasi <patrick.bellasi@...bug.net>
Subject: Re: [PATCH v2] sched: Fix out-of-bound access in uclamp

On Thu, 29 Apr 2021 at 17:27, Quentin Perret <qperret@...gle.com> wrote:
>
> Util-clamp places tasks in different buckets based on their clamp values
> for performance reasons. However, the size of buckets is currently
> computed using a rounding division, which can lead to an off-by-one
> error in some configurations.
>
> For instance, with 20 buckets, the bucket size will be 1024/20=51. A
> task with a clamp of 1024 will be mapped to bucket id 1024/51=20. Sadly,
> valid indices are in the range [0,19], hence leading to an out-of-bounds
> memory access.
>
> Fix the math to compute the bucket size.
>
> Fixes: 69842cba9ace ("sched/uclamp: Add CPU's clamp buckets refcounting")
> Suggested-by: Qais Yousef <qais.yousef@....com>
> Signed-off-by: Quentin Perret <qperret@...gle.com>
>
> ---
>
> Changes in v2:
>  - replaced the DIV_ROUND_UP(a,b) with a/b+1 (Dietmar)
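
For the record, the out-of-bounds index is easy to reproduce outside the
kernel. A rough userspace sketch of the current DIV_ROUND_CLOSEST() math
(assuming 20 buckets; plain C, not the kernel code):

#include <stdio.h>

#define SCHED_CAPACITY_SCALE    1024
#define UCLAMP_BUCKETS          20
/* same rounding as the kernel's DIV_ROUND_CLOSEST() for positive values */
#define DIV_ROUND_CLOSEST(x, d) (((x) + (d) / 2) / (d))
#define UCLAMP_BUCKET_DELTA     DIV_ROUND_CLOSEST(SCHED_CAPACITY_SCALE, UCLAMP_BUCKETS)

int main(void)
{
        unsigned int clamp = SCHED_CAPACITY_SCALE;

        /* prints: delta=51 bucket_id=20 max_valid_id=19 */
        printf("delta=%u bucket_id=%u max_valid_id=%u\n",
               UCLAMP_BUCKET_DELTA, clamp / UCLAMP_BUCKET_DELTA,
               UCLAMP_BUCKETS - 1);
        return 0;
}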

Doesn't this create unfairness between the buckets?

If we take your example above of 20 buckets, the delta is now 52. We
would then expect the last bucket to cover the range [972-1024], but
since the last bucket only starts at 19 * 52 = 988, values lower than
988 will go into bucket idx 18 instead. And the more buckets you have,
the worse the skew gets, as shown in the sketch below.
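
To make the skew concrete, a rough sketch (plain C, assuming
SCHED_CAPACITY_SCALE = 1024) comparing where the last bucket starts
with delta = 1024/N + 1 against an even N-way split:

#include <stdio.h>

int main(void)
{
        unsigned int n;

        for (n = 5; n <= 20; n += 5) {
                unsigned int delta = 1024 / n + 1;

                /* start of the last bucket vs. an even N-way split */
                printf("buckets=%2u delta=%3u last starts at %u (fair: %u)\n",
                       n, delta, (n - 1) * delta, (n - 1) * 1024 / n);
        }
        return 0;
}

With 20 buckets the last bucket only starts at 988 instead of ~972, and
the gap keeps growing as N grows.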

Your problem comes from the fact that there are 1025 possible clamp
values ([0-1024]) instead of 1024. Wouldn't it be easier to have a
special condition for the SCHED_CAPACITY_SCALE value?
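
Something like the below (untested sketch, tweaking the existing
uclamp_bucket_id() helper) would keep the plain integer division for
the bucket size and simply clamp the index:

#define UCLAMP_BUCKET_DELTA (SCHED_CAPACITY_SCALE / UCLAMP_BUCKETS)

static inline unsigned int uclamp_bucket_id(unsigned int clamp_value)
{
        /* clamp the index so 1024 maps to the last valid bucket */
        return min_t(unsigned int, clamp_value / UCLAMP_BUCKET_DELTA,
                     UCLAMP_BUCKETS - 1);
}

That keeps all buckets the same size except the last one, which just
absorbs the remainder of the division.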

>
> This was found thanks to the SCHED_WARN_ON() in uclamp_rq_dec_id(), which
> indicated a broken state while running with 20 buckets on Android.
>
> Big thanks to Qais for the help with this one.
> ---
>  kernel/sched/core.c | 3 +--
>  1 file changed, 1 insertion(+), 2 deletions(-)
>
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index 98191218d891..c5fb230dc604 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -920,8 +920,7 @@ static struct uclamp_se uclamp_default[UCLAMP_CNT];
>   */
>  DEFINE_STATIC_KEY_FALSE(sched_uclamp_used);
>
> -/* Integer rounded range for each bucket */
> -#define UCLAMP_BUCKET_DELTA DIV_ROUND_CLOSEST(SCHED_CAPACITY_SCALE, UCLAMP_BUCKETS)
> +#define UCLAMP_BUCKET_DELTA (SCHED_CAPACITY_SCALE / UCLAMP_BUCKETS + 1)
>
>  #define for_each_clamp_id(clamp_id) \
>         for ((clamp_id) = 0; (clamp_id) < UCLAMP_CNT; (clamp_id)++)
> --
> 2.31.1.498.g6c1eba8ee3d-goog
>
