Message-ID: <b6737247-ca02-e197-70c7-020952d95c1b@arm.com>
Date:   Fri, 30 Apr 2021 15:00:00 +0200
From:   Dietmar Eggemann <dietmar.eggemann@....com>
To:     Vincent Guittot <vincent.guittot@...aro.org>,
        Quentin Perret <qperret@...gle.com>
Cc:     Ingo Molnar <mingo@...hat.com>,
        Peter Zijlstra <peterz@...radead.org>,
        Juri Lelli <juri.lelli@...hat.com>,
        Steven Rostedt <rostedt@...dmis.org>,
        Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
        Daniel Bristot de Oliveira <bristot@...hat.com>,
        Qais Yousef <qais.yousef@....com>,
        Android Kernel Team <kernel-team@...roid.com>,
        linux-kernel <linux-kernel@...r.kernel.org>,
        Patrick Bellasi <patrick.bellasi@...bug.net>
Subject: Re: [PATCH v2] sched: Fix out-of-bound access in uclamp

On 30/04/2021 14:03, Vincent Guittot wrote:
> On Fri, 30 Apr 2021 at 11:40, Quentin Perret <qperret@...gle.com> wrote:
>>
>> On Friday 30 Apr 2021 at 10:49:50 (+0200), Vincent Guittot wrote:
>>> 20 buckets is probably not the best example because of the rounding
>>> of the division. With 16 buckets, each bucket should be exactly 64
>>> steps large, except the last one, which will have 65 steps because
>>> of the value 1024. With your change, buckets will be 65 steps large
>>> and the last one will be only 49 steps large.
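
To spell out the arithmetic: with SCHED_CAPACITY_SCALE = 1024 and 16
buckets, a delta of 1024/16 + 1 = 65 puts the start of the last bucket
at 15 * 65 = 975, leaving it only 1024 - 975 = 49 steps, whereas a
delta of exactly 64 gives 64 steps per bucket, with the last one also
covering the value 1024.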
>>
>> OK, so what do you think of this?
> 
> Looks good to me

+1

>> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
>> index c5fb230dc604..dceeb5821797 100644
>> --- a/kernel/sched/core.c
>> +++ b/kernel/sched/core.c
>> @@ -920,14 +920,14 @@ static struct uclamp_se uclamp_default[UCLAMP_CNT];
>>   */
>>  DEFINE_STATIC_KEY_FALSE(sched_uclamp_used);
>>
>> -#define UCLAMP_BUCKET_DELTA (SCHED_CAPACITY_SCALE / UCLAMP_BUCKETS + 1)
>> +#define UCLAMP_BUCKET_DELTA DIV_ROUND_CLOSEST(SCHED_CAPACITY_SCALE, UCLAMP_BUCKETS)
>>
>>  #define for_each_clamp_id(clamp_id) \
>>         for ((clamp_id) = 0; (clamp_id) < UCLAMP_CNT; (clamp_id)++)
>>
>>  static inline unsigned int uclamp_bucket_id(unsigned int clamp_value)
>>  {
>> -       return clamp_value / UCLAMP_BUCKET_DELTA;
>> +       return min(clamp_value / UCLAMP_BUCKET_DELTA, UCLAMP_BUCKETS - 1);

IMHO, this asks for

min_t(unsigned int, clamp_value / UCLAMP_BUCKET_DELTA, UCLAMP_BUCKETS - 1);
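
(Reason: the kernel's min() is strictly type-checked and should warn
here, since clamp_value / UCLAMP_BUCKET_DELTA is unsigned int while
UCLAMP_BUCKETS - 1 is a plain signed int constant; min_t() casts both
operands to the given type.)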

>>  }
>>
>>  static inline unsigned int uclamp_none(enum uclamp_id clamp_id)
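
Worth spelling out why the min() is needed at all: DIV_ROUND_CLOSEST()
can round the delta down, so without the min() the plain division could
return UCLAMP_BUCKETS itself for clamp_value = 1024 (e.g. 20 buckets ->
delta 51, and 1024/51 = 20), i.e. an out-of-bound bucket index.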

Looks like this will fix a lot of possible configs:

nbr buckets 1-4, 7-8, 10-12, 14-17, *20*, 26, 29-32 ...

We would still introduce larger last buckets, right?

Examples:

nbr_buckets	delta	last bucket size

20		51	 +4 = 55
26		39	+10 = 49
29		35	 +9 = 44
...
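
A quick userspace sketch to reproduce these numbers (my own, with
DIV_ROUND_CLOSEST() open-coded for unsigned operands; the last bucket
size is computed as SCHED_CAPACITY_SCALE - (nbr_buckets - 1) * delta):

#include <stdio.h>

#define SCHED_CAPACITY_SCALE	1024

/* open-coded DIV_ROUND_CLOSEST() for unsigned operands */
#define DIV_ROUND_CLOSEST(x, d)	(((x) + (d) / 2) / (d))

int main(void)
{
	unsigned int nbr_buckets[] = { 20, 26, 29 };

	for (unsigned int i = 0;
	     i < sizeof(nbr_buckets) / sizeof(nbr_buckets[0]); i++) {
		unsigned int n = nbr_buckets[i];
		unsigned int delta = DIV_ROUND_CLOSEST(SCHED_CAPACITY_SCALE, n);
		/* last bucket spans [(n - 1) * delta .. SCHED_CAPACITY_SCALE) */
		unsigned int last = SCHED_CAPACITY_SCALE - (n - 1) * delta;

		printf("%2u buckets: delta %u, last bucket %u steps (+%u)\n",
		       n, delta, last, last - delta);
	}

	return 0;
}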
