Message-ID: <39660e162b54f241cdb571e0029c26d4596ec8e0.camel@perches.com>
Date: Sat, 05 Mar 2022 09:13:21 -0800
From: Joe Perches <joe@...ches.com>
To: dann frazier <dann.frazier@...onical.com>, stable@...r.kernel.org
Cc: Miao Xie <miaox@...fujitsu.com>,
Valentin Schneider <valentin.schneider@....com>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...nel.org>,
Vincent Guittot <vincent.guittot@...aro.org>,
Barry Song <song.bao.hua@...ilicon.com>,
John Paul Adrian Glaubitz <glaubitz@...sik.fu-berlin.de>,
Sergei Trofimovich <slyfox@...too.org>,
Anatoly Pugachev <matorola@...il.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Linus Torvalds <torvalds@...ux-foundation.org>,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH 5.10+5.4 2/3] sched/topology: Fix
sched_domain_topology_level alloc in sched_init_numa()
On Sat, 2022-03-05 at 09:44 -0700, dann frazier wrote:
> From: Dietmar Eggemann <dietmar.eggemann@....com>
>
> commit 71e5f6644fb2f3304fcb310145ded234a37e7cc1 upstream.
>
> Commit "sched/topology: Make sched_init_numa() use a set for the
> deduplicating sort" allocates 'i + nr_levels (level)' instead of
> 'i + nr_levels + 1' sched_domain_topology_level.
[]
> diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
[]
> @@ -1655,7 +1655,7 @@ void sched_init_numa(void)
> /* Compute default topology size */
> for (i = 0; sched_domain_topology[i].mask; i++);
Thanks.
A couple of trivial notes:
A trailing semicolon on a for loop, "for (...);", can be error-prone,
and this is also the only use of that style under kernel/.
A more common usage might be:
i = 0;
while (sched_domain_topology[i].mask)
i++;
> - tl = kzalloc((i + nr_levels) *
> + tl = kzalloc((i + nr_levels + 1) *
> sizeof(struct sched_domain_topology_level), GFP_KERNEL);
kcalloc() would be better, although the array is completely initialized
by the loop below, so the zeroing isn't strictly necessary.
Maybe use kmalloc_array() instead.
Doubtful there's any overall impact, though.