Message-ID: <xhsmhtu7m72fr.mognet@vschneid.remote.csb>
Date:   Tue, 12 Jul 2022 16:53:12 +0100
From:   Valentin Schneider <vschneid@...hat.com>
To:     Hao Jia <jiahao.os@...edance.com>, mingo@...hat.com,
        peterz@...radead.org, juri.lelli@...hat.com,
        vincent.guittot@...aro.org, dietmar.eggemann@....com,
        rostedt@...dmis.org, bsegall@...gle.com, mgorman@...e.de,
        bristot@...hat.com
Cc:     linux-kernel@...r.kernel.org
Subject: Re: [External] Re: [PATCH] sched/topology: Optimized copy default
 topology in sched_init_numa()

On 11/07/22 18:28, Hao Jia wrote:
> On 2022/7/4 Valentin Schneider wrote:
>>
>> It's not a very hot path but I guess this lets you shave off a bit of boot
>> time... While you're at it, you could add an early
> Thanks for your time and suggestion.
>>
>>    if (nr_node_ids == 1)
>>            return;
>>
>
> This will cause the values of sched_domains_numa_levels and
> sched_max_numa_distance to be different from before, and the changed
> sched_domains_numa_levels may change the return value of
> sched_numa_find_closest().
> I'm not sure whether that will cause problems.
>

True, we need to be careful here, but those are all static, so they get
initialized to sensible defaults (zero / NULL pointers).
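
FWIW, something like (declarations paraphrased from
kernel/sched/topology.c; exact qualifiers may differ between trees):

    /* file scope, so zero-initialized before sched_init_numa() runs */
    static int sched_domains_numa_levels;
    static int sched_max_numa_distance;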

sched_numa_find_closest() will return nr_cpu_ids, which makes sense, so I
think we can get away with an early return.
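
To make that concrete: with sched_domains_numa_levels still at its initial
0, the lookup loop never runs and the function falls through to its
nr_cpu_ids default (excerpt trimmed from kernel/sched/topology.c, so the
details may differ on your tree):

    int sched_numa_find_closest(const struct cpumask *cpus, int cpu)
    {
            int i, j = cpu_to_node(cpu), found = nr_cpu_ids;

            for (i = 0; i < sched_domains_numa_levels; i++) {
                    /* never entered when sched_domains_numa_levels == 0 */
                    ...
            }

            return found; /* i.e. nr_cpu_ids on a single-node system */
    }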

>> since !NUMA systems still go through sched_init_numa() if they have a
>> kernel with CONFIG_NUMA (which should be most of them nowadays) and IIRC
>> they end up with an unused NODE topology level.
>>
>
> I'm confused as to why most !NUMA systems enable CONFIG_NUMA in the
> kernel. Maybe for scalability?
>

It just makes things easier from a distribution's point of view - ship a
single kernel image everyone can use, rather than N different images for N
different types of systems.

AFAIA having CONFIG_NUMA on a UMA (!NUMA) system isn't bad; it just adds
entries to the sched_domain_topology at boot time which end up being
unused.
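
For reference, that's because sched_init_numa() unconditionally appends
the NUMA identity level (excerpt from kernel/sched/topology.c, modulo
version drift):

    /*
     * Add the NUMA identity distance, aka single NODE.
     */
    tl[i++] = (struct sched_domain_topology_level){
            .mask = sd_numa_mask,
            .numa_level = 0,
            SD_INIT_NAME(NODE)
    };

On a single-node machine that level spans the same CPUs as the level below
it, so it gets degenerated away and just wastes a bit of setup work.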
