Message-ID: <20260120113246.27987-1-kprateek.nayak@amd.com>
Date: Tue, 20 Jan 2026 11:32:38 +0000
From: K Prateek Nayak <kprateek.nayak@....com>
To: Ingo Molnar <mingo@...hat.com>, Peter Zijlstra <peterz@...radead.org>,
	Juri Lelli <juri.lelli@...hat.com>, Vincent Guittot
	<vincent.guittot@...aro.org>, <linux-kernel@...r.kernel.org>
CC: Dietmar Eggemann <dietmar.eggemann@....com>, Steven Rostedt
	<rostedt@...dmis.org>, Ben Segall <bsegall@...gle.com>, Mel Gorman
	<mgorman@...e.de>, Valentin Schneider <vschneid@...hat.com>, Chen Yu
	<yu.c.chen@...el.com>, Shrikanth Hegde <sshegde@...ux.ibm.com>, "Gautham R.
 Shenoy" <gautham.shenoy@....com>, K Prateek Nayak <kprateek.nayak@....com>
Subject: [PATCH v3 0/8] sched/topology: Optimize sd->shared allocation

As discussed at LPC'25, allocating per-CPU "sched_domain_shared"
objects for every topology level was found to be unnecessary: only
"sd_llc_shared" is ever used by the scheduler, and the rest are either
reclaimed during __sdt_free() or remain allocated without serving any
purpose.

Folks are already optimizing away unnecessary sched domain allocations:
commit f79c9aa446d6 ("x86/smpboot: avoid SMT domain attach/destroy
if SMT is not enabled") removes the SMT level entirely on the x86 side
when it is known that the scheduler would degenerate the domain anyway.

This series goes one step further with the "sched_domain_shared"
allocations by moving them out of "sd_data", which is allocated for
every topology level, and into "s_data", which is allocated once per
partition.

"sd->shared" is only allocated for the topmost SD_SHARE_LLC domain and
the topology layer uses the sched domain degeneration path to pass the
reference to the final "sd_llc" domain. Since degeneration of parent
ensures 1:1 mapping between the span with the child, and the fact that
SD_SHARE_LLC domains never overlap, degeneration of an SD_SAHRE_LLC
domain either means its span is same as that of its child or that it
only contains a single CPU making it redundant.

Since the topology layer also checks for the existence of a valid
"sd->shared" whenever "sd_llc" is present, the handling of
"sd_llc_shared" can be simplified wherever a reference to "sd_llc" is
already in scope (Patch 7 and Patch 8).

Patches are based on top of:

  git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git sched/core

at commit 5d86d542f68f ("sched/fair: Remove nohz.nr_cpus and use weight
of cpumask instead")
---
Changelog rfc v2..v3:

o Broke off the "sd->shared" assignment optimization into a separate
  series for easier review.

o Spotted a case of incorrect calculation of load balancing periods
  in presence of cpuset partitions (Patch 1).

o Broke off the single "sd->shared" assignment optimization patch into
  3 parts for easier review (Patch 2 - Patch 4). The "Reviewed-by:" tag
  from Gautham was dropped as a result.

o Building on Peter's recent effort to remove the superfluous usage
  of rcu_read_lock() in !preemptible() regions, Patch 5 and Patch 6
  clean up the fair task's wakeup path before further cleanups in
  Patch 7 and Patch 8.

o Dropped the RFC tag.

v2: https://lore.kernel.org/lkml/20251208083602.31898-1-kprateek.nayak@amd.com/
---
K Prateek Nayak (8):
  sched/topology: Compute sd_weight considering cpuset partitions
  sched/topology: Allocate per-CPU sched_domain_shared in s_data
  sched/topology: Switch to assigning "sd->shared" from s_data
  sched/topology: Remove sched_domain_shared allocation with sd_data
  sched/core: Check for rcu_read_lock_any_held() in idle_get_state()
  sched/fair: Remove superfluous rcu_read_lock() in the wakeup path
  sched/fair: Simplify the entry condition for update_idle_cpu_scan()
  sched/fair: Simplify SIS_UTIL handling in select_idle_cpu()

 include/linux/sched/topology.h |   1 -
 kernel/sched/fair.c            |  62 +++++++-----------
 kernel/sched/sched.h           |   2 +-
 kernel/sched/topology.c        | 111 ++++++++++++++++++++++-----------
 4 files changed, 101 insertions(+), 75 deletions(-)


base-commit: 5d86d542f68fda7ef6d543ac631b741db734101a
-- 
2.34.1
