Date:   Tue, 9 Aug 2022 17:04:07 +0300
From:   Tariq Toukan <ttoukan.linux@...il.com>
To:     Valentin Schneider <vschneid@...hat.com>,
        Tariq Toukan <tariqt@...dia.com>,
        "David S. Miller" <davem@...emloft.net>,
        Saeed Mahameed <saeedm@...dia.com>,
        Jakub Kicinski <kuba@...nel.org>,
        Ingo Molnar <mingo@...hat.com>,
        Peter Zijlstra <peterz@...radead.org>,
        Juri Lelli <juri.lelli@...hat.com>
Cc:     Eric Dumazet <edumazet@...gle.com>,
        Paolo Abeni <pabeni@...hat.com>, netdev@...r.kernel.org,
        Gal Pressman <gal@...dia.com>,
        Vincent Guittot <vincent.guittot@...aro.org>,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH net-next V4 1/3] sched/topology: Add NUMA-based CPUs
 spread API



On 8/9/2022 3:52 PM, Valentin Schneider wrote:
> On 09/08/22 13:18, Tariq Toukan wrote:
>> On 8/9/2022 1:02 PM, Valentin Schneider wrote:
>>>
>>> Are there cases where we can't figure this out in advance? From what I grok
>>> out of the two callsites you patched, all vectors will be used unless some
>>> error happens, so compressing the CPUs into a single cpumask seemed
>>> sufficient.
>>>
>>
>> All vectors will be initialized to support the maximum number of traffic
>> rings. However, the actual number of traffic rings can be controlled and
>> set to a lower number, N_actual < N. In that case, we'll be using only
>> N_actual instances, and we want them to be the first/closest ones.
> 
> Ok, that makes sense, thank you.
> 
> In that case I wonder if we'd want a public-facing iterator for
> sched_domains_numa_masks[level][node], rather than copying a portion of
> it. Something like the below (naming and implementation haven't been
> thought about too much).
> 
>    /* Return the cpumask of CPUs within NUMA distance level @level of
>     * @node, or NULL if out of range or not yet initialized.
>     */
>    const struct cpumask *sched_numa_level_mask(int node, int level)
>    {
>            struct cpumask ***masks = rcu_dereference(sched_domains_numa_masks);
> 
>            if (node >= nr_node_ids || level >= sched_domains_numa_levels)
>                    return NULL;
> 
>            if (!masks)
>                    return NULL;
> 
>            return masks[level][node];
>    }
>    EXPORT_SYMBOL_GPL(sched_numa_level_mask);
> 

The above can be kept static, exposing only the foo() function below, 
similar to my sched_cpus_set_spread().
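For instance (naming illustrative, along the lines of my 
sched_cpus_set_spread(); nothing final), the exported surface could 
shrink to:

   /* kernel/sched/topology.c: the level-mask lookup stays internal */
   static const struct cpumask *sched_numa_level_mask(int node, int level);

   /* include/linux/topology.h (or wherever fits): the one public entry
    * point, i.e. foo() above under a proper name.
    */
   void sched_numa_spread_cpus(int node, int cpus[], int ncpus);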

LGTM.
How do you suggest we proceed?
Do you want to formalize it, or should I take it from here?


>    /* Iterate over @node's NUMA level masks, nearest level first. */
>    #define for_each_numa_level_mask(node, lvl, mask)	    \
>            for (mask = sched_numa_level_mask(node, lvl); mask;	\
>                 mask = sched_numa_level_mask(node, ++lvl))
> 
>    /* Fill cpus[] with up to @ncpus CPUs, closest to @node first.
>     * NB: the level masks are cumulative, so a full implementation
>     * would skip CPUs already recorded at an earlier level.
>     */
>    void foo(int node, int cpus[], int ncpus)
>    {
>            const struct cpumask *mask;
>            int lvl = 0;
>            int i = 0;
>            int cpu;
> 
>            rcu_read_lock();
>            for_each_numa_level_mask(node, lvl, mask) {
>                    for_each_cpu(cpu, mask) {
>                            cpus[i] = cpu;
>                            if (++i == ncpus)
>                                    goto done;
>                    }
>            }
>    done:
>            rcu_read_unlock();
>    }
> 
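For reference, a driver-side caller would look roughly like this (a 
minimal sketch: foo() as above; the IRQ plumbing, names, and error 
handling are illustrative only):

   /* Hypothetical caller: pin the first @nch active channel IRQs to
    * the CPUs closest to the device's NUMA node. Assumes at least
    * @nch online CPUs so every cpus[] slot gets filled.
    */
   static void drv_spread_irq_affinity(struct device *dev, int irqs[], int nch)
   {
           int *cpus;
           int i;

           cpus = kcalloc(nch, sizeof(*cpus), GFP_KERNEL);
           if (!cpus)
                   return;

           foo(dev_to_node(dev), cpus, nch);

           for (i = 0; i < nch; i++)
                   irq_set_affinity_hint(irqs[i], cpumask_of(cpus[i]));

           kfree(cpus);
   }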
