Message-ID: <xhsmhfshxbnbd.mognet@vschneid.remote.csb>
Date: Mon, 15 Aug 2022 15:20:38 +0100
From: Valentin Schneider <vschneid@...hat.com>
To: Tariq Toukan <ttoukan.linux@...il.com>, netdev@...r.kernel.org,
linux-kernel@...r.kernel.org
Cc: Tariq Toukan <tariqt@...dia.com>,
"David S. Miller" <davem@...emloft.net>,
Saeed Mahameed <saeedm@...dia.com>,
Jakub Kicinski <kuba@...nel.org>,
Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Juri Lelli <juri.lelli@...hat.com>,
Eric Dumazet <edumazet@...gle.com>,
Paolo Abeni <pabeni@...hat.com>, Gal Pressman <gal@...dia.com>,
Vincent Guittot <vincent.guittot@...aro.org>
Subject: Re: [PATCH 1/2] sched/topology: Introduce sched_numa_hop_mask()

On 14/08/22 11:19, Tariq Toukan wrote:
> The API is indeed easy to use, and the driver part looks straightforward.
>
> I appreciate the tricks you used to make it work!
> However, the implementation is relatively complicated, not easy to read
> or understand, and touches several files. I do understand what you did
> here, but I guess not all respective maintainers will like it. Let's see.
>
Dumping it all into a single diff also doesn't help :-) I think the changes
required to get a for_each_cpu_andnot() are straightforward enough; the one
eyesore is the macro, but I consider it a necessary evil to get an
allocation-free interface.
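
FWIW, here's roughly the shape of the loop that macro is there to hide.
This is just a sketch: it assumes sched_numa_hop_mask(node, hops) returns
an ERR_PTR/NULL once you run out of levels and that each hop's mask
includes the previous hops' CPUs; the exact conventions are whatever this
series ends up with:

        const struct cpumask *prev = cpu_none_mask, *mask;
        unsigned int hop, cpu;

        rcu_read_lock();
        for (hop = 0; ; hop++) {
                mask = sched_numa_hop_mask(node, hop);
                if (IS_ERR_OR_NULL(mask))
                        break;

                /* Only the CPUs added at this hop: no temp mask, no allocation */
                for_each_cpu_andnot(cpu, mask, prev) {
                        /* hand @cpu to the caller's loop body */
                }

                prev = mask;
        }
        rcu_read_unlock();
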
> One alternative to consider, which would simplify things, is switching
> back to returning an array of CPUs, ordered by their distance, up to a
> provided argument 'npus'.
> This way, you would iterate over sched_numa_hop_mask() internally, easily
> maintaining the cpumask diffs between two hops, without the need to
> compute them on the fly as part of an exposed for-loop macro.
>
That requires extra storage, however: at the very least the array itself,
plus a temp cpumask to remember already-visited CPUs (the alternative being
to rescan the array on every CPU iteration to figure out whether it has
already been added).
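
To make the storage point concrete, a purely hypothetical array-filling
helper (not from this series, names made up) ends up needing that extra
cpumask on top of the array:

        /* Hypothetical, for illustration only; @ncpus mirrors the 'npus' above */
        static int numa_cpus_by_distance(int node, int *cpus, int ncpus)
        {
                const struct cpumask *mask;
                cpumask_var_t visited;  /* the temp cpumask in question */
                int hop, cpu, n = 0;

                if (!zalloc_cpumask_var(&visited, GFP_KERNEL))
                        return -ENOMEM;

                rcu_read_lock();
                for (hop = 0; n < ncpus; hop++) {
                        mask = sched_numa_hop_mask(node, hop);
                        if (IS_ERR_OR_NULL(mask))
                                break;

                        for_each_cpu_andnot(cpu, mask, visited) {
                                cpus[n++] = cpu;
                                if (n == ncpus)
                                        break;
                        }
                        cpumask_or(visited, visited, mask);
                }
                rcu_read_unlock();

                free_cpumask_var(visited);
                return n;
        }

(or you drop @visited and rescan cpus[0..n) for every candidate CPU, which
is the other option mentioned above).
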
I'm going to submit the cpumask / sched changes; hopefully I'll have gotten
somewhere by the time you're back from PTO.