Message-ID: <e5eed9c9-afd4-e88c-dbee-7fef2e76043a@bytedance.com>
Date:   Mon, 11 Jul 2022 18:28:09 +0800
From:   Hao Jia <jiahao.os@...edance.com>
To:     Valentin Schneider <vschneid@...hat.com>, mingo@...hat.com,
        peterz@...radead.org, juri.lelli@...hat.com,
        vincent.guittot@...aro.org, dietmar.eggemann@....com,
        rostedt@...dmis.org, bsegall@...gle.com, mgorman@...e.de,
        bristot@...hat.com
Cc:     linux-kernel@...r.kernel.org
Subject: Re: [External] Re: [PATCH] sched/topology: Optimized copy default
 topology in sched_init_numa()



On 2022/7/4 Valentin Schneider wrote:
> 
> It's not a very hot path but I guess this lets you shave off a bit of boot
> time... While you're at it, you could add an early
> 
>    if (nr_node_ids == 1)
>            return;
> 

Thanks for your time and suggestion.

Returning early there would leave sched_domains_numa_levels and 
sched_max_numa_distance with different values than before, and the 
changed sched_domains_numa_levels may change the return value of 
sched_numa_find_closest().
I'm not sure whether that would cause problems.
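
To make the concern concrete: a bare early return would skip the
assignments at the end of sched_init_numa() (quoted from the 5.19-rc4
context in the diff below):

	sched_domains_numa_levels = nr_levels;
	WRITE_ONCE(sched_max_numa_distance, sched_domains_numa_distance[nr_levels - 1]);

sched_domains_numa_levels would then keep its initial value of 0, and
since sched_numa_find_closest() iterates over sched_domains_numa_levels
levels (at least in 5.19-rc4), its return value could change.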

> since !NUMA systems still go through sched_init_numa() if they have a
> kernel with CONFIG_NUMA (which should be most of them nowadays) and IIRC
> they end up with an unused NODE topology level.
> 

I'm confused: why do most !NUMA systems enable CONFIG_NUMA in the
kernel? Maybe for scalability?


> Regardless:
> 
> Reviewed-by: Valentin Schneider <vschneid@...hat.com>
> 

How about this?

The size of struct sched_domain_topology_level is 64 bytes.
Almost all NUMA platforms are multi-core (CONFIG_SCHED_MC enabled),
which means the default_topology array has at least 128 bytes that
need to be copied in sched_init_numa(). On most x86 platforms
CONFIG_SCHED_SMT is also enabled, so even more bytes need to be copied.

memcpy() is optimized for each architecture, and on platforms with
CONFIG_NUMA enabled those optimizations are likely to apply.
So let's use memcpy() to copy the default topology in sched_init_numa().
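
Concretely, the change below replaces the per-level copy loop with one
memcpy() over the i levels counted by the sizing loop:

	/* Before: copy the default topology one level at a time. */
	for (i = 0; sched_domain_topology[i].mask; i++)
		tl[i] = sched_domain_topology[i];

	/* After: a single memcpy() of all i levels. */
	memcpy(tl, sched_domain_topology,
	       sizeof(struct sched_domain_topology_level) * i);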

Tests were done on an Intel Xeon(R) Platinum 8260 CPU@...0GHz machine
with 2 NUMA nodes, each of which has 24 cores with SMT2 enabled, so 96
CPUs in total.

RDTSC is used to measure the elapsed time; the baseline kernel is 5.19-rc4.
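
The timing was done along these lines (a minimal userspace-style
sketch; time_tsc() is an illustrative helper, not the exact
instrumentation used in the kernel):

	#include <stdint.h>
	#include <x86intrin.h>	/* __rdtsc() */

	/* Measure one call to fn() in TSC ticks. */
	static inline uint64_t time_tsc(void (*fn)(void))
	{
		uint64_t start = __rdtsc();
		fn();
		return __rdtsc() - start;
	}

The "average tsc ticks" rows below are averages of such measurements.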

With CONFIG_SCHED_SMT && CONFIG_SCHED_CLUSTER && CONFIG_SCHED_MC
enabled, the default_topology array has 256 bytes that need to be
copied in sched_init_numa():
                      5.19-rc4   5.19-rc4 with patch
average tsc ticks    516.57      85.33   (-83.48%*)

With only CONFIG_SCHED_MC enabled, the default_topology array has
128 bytes that need to be copied in sched_init_numa():
                      5.19-rc4   5.19-rc4 with patch
average tsc ticks    65.71       55.00   (-16.30%*)

Since !NUMA systems still go through sched_init_numa() if they run a
kernel with CONFIG_NUMA (which should be most of them nowadays), we can
also skip the copying and the memory allocation of the useless default
topology on such systems.

Suggested-by: Valentin Schneider <vschneid@...hat.com>
Signed-off-by: Hao Jia <jiahao.os@...edance.com>
---
 kernel/sched/topology.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index 05b6c2ad90b9..c439e58f22b9 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -1907,6 +1907,9 @@ void sched_init_numa(int offline_node)
 	}
 	rcu_assign_pointer(sched_domains_numa_masks, masks);
 
+	if (nr_node_ids == 1)
+		goto skip;
+
 	/* Compute default topology size */
 	for (i = 0; sched_domain_topology[i].mask; i++);
 
@@ -1918,8 +1921,7 @@ void sched_init_numa(int offline_node)
 	/*
 	 * Copy the default topology bits..
 	 */
-	for (i = 0; sched_domain_topology[i].mask; i++)
-		tl[i] = sched_domain_topology[i];
+	memcpy(tl, sched_domain_topology, sizeof(struct sched_domain_topology_level) * i);
 
 	/*
 	 * Add the NUMA identity distance, aka single NODE.
@@ -1946,6 +1948,7 @@ void sched_init_numa(int offline_node)
 	sched_domain_topology_saved = sched_domain_topology;
 	sched_domain_topology = tl;
 
+skip:
 	sched_domains_numa_levels = nr_levels;
 	WRITE_ONCE(sched_max_numa_distance, sched_domains_numa_distance[nr_levels - 1]);
 

thanks,
Hao
