Message-ID: <aBuMet_S1ONS1pOT@gourry-fedora-PF4VCD3F>
Date: Wed, 7 May 2025 12:38:18 -0400
From: Gregory Price <gourry@...rry.net>
To: rakie.kim@...com
Cc: joshua.hahnjy@...il.com, akpm@...ux-foundation.org, linux-mm@...ck.org,
	linux-kernel@...r.kernel.org, linux-cxl@...r.kernel.org,
	dan.j.williams@...el.com, ying.huang@...ux.alibaba.com,
	kernel_team@...ynix.com, honggyu.kim@...com, yunjeong.mun@...com
Subject: Re: [RFC] Add per-socket weight support for multi-socket systems in
 weighted interleave

On Wed, May 07, 2025 at 06:35:16PM +0900, rakie.kim@...com wrote:
> Hi Gregory, Joshua,
> 
> I hope this message finds you well. I'm writing to discuss a feature I
> believe would enhance the flexibility of the weighted interleave policy:
> support for per-socket weighting in multi-socket systems.
> 
> ---
> 
> <Background and prior design context>
> 
> While reviewing the early versions of the weighted interleave patches,
> I noticed that a source-aware weighting structure was included in v1:
> 
>   https://lore.kernel.org/all/20231207002759.51418-1-gregory.price@memverge.com/
> 
> However, this structure was removed in a later version:
> 
>   https://lore.kernel.org/all/20231209065931.3458-1-gregory.price@memverge.com/
> 
> Unfortunately, I was unable to participate in the discussion at that
> time, and I sincerely apologize for missing it.
> 
> From what I understand, there may have been valid reasons for removing
> the source-relative design, including:
> 
> 1. Increased complexity in mempolicy internals. Adding source awareness
>    introduces challenges around dynamic nodemask changes, task policy
>    sharing during fork(), mbind(), rebind(), etc.
> 
> 2. A lack of concrete, motivating use cases. At that stage, it might
>    have been more pragmatic to focus on a 1D flat weight array.
> 
> If there were additional reasons, I would be grateful to learn them.
>

x. Task-local weights would have required additional syscalls, and
   there were too few active users to warrant the extra complexity.

y. NUMA interfaces don't capture cross-socket interconnect information,
   and as a result actually hide the "true" bandwidth values from the
   perspective of a given socket.

As a result, mempolicy just isn't well positioned to deal with this as
designed, and introducing per-task weights with the additional
extensions was a bridge too far.  Global weights are sufficient if you
combine cpusets/core-pinning with a nodemask that excludes cross-socket
nodes (i.e. don't use cross-socket memory at all).
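
Something like this untested sketch (assumes nodes 0/2 are socket 0's
DRAM/CXL and 1/3 are socket 1's, that numaif.h is available and you
link with -lnuma, and that MPOL_WEIGHTED_INTERLEAVE either comes from
your headers or falls back to the uapi value):

#include <stdio.h>
#include <numaif.h>		/* set_mempolicy() */

#ifndef MPOL_WEIGHTED_INTERLEAVE
#define MPOL_WEIGHTED_INTERLEAVE 6	/* uapi value, if headers are old */
#endif

int main(void)
{
	/* socket-local DRAM + CXL only (nodes 0 and 2), no cross-socket */
	unsigned long nodemask = (1UL << 0) | (1UL << 2);

	/* global sysfs weights still apply, but only to these two nodes */
	if (set_mempolicy(MPOL_WEIGHTED_INTERLEAVE, &nodemask,
			  sizeof(nodemask) * 8)) {
		perror("set_mempolicy");
		return 1;
	}
	/* ... allocate and run; pages spread across nodes 0 and 2 ... */
	return 0;
}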

For workloads that do scale up to use both sockets and both devices,
you either want to spread it out according to global weights or use
region-specific (mbind) weighted interleave anyway.
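
For the region-specific case it's roughly the following (again just a
sketch; the mapping size and the all-nodes mask are made up):

#include <stdio.h>
#include <sys/mman.h>
#include <numaif.h>		/* mbind() */

#ifndef MPOL_WEIGHTED_INTERLEAVE
#define MPOL_WEIGHTED_INTERLEAVE 6
#endif

int main(void)
{
	size_t len = 1UL << 30;		/* 1GiB region to spread out */
	unsigned long nodemask = 0xf;	/* nodes 0-3 */
	void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (buf == MAP_FAILED)
		return 1;

	/* weighted-interleave just this VMA across nodes 0-3; per-node
	 * weights still come from the global sysfs values */
	if (mbind(buf, len, MPOL_WEIGHTED_INTERLEAVE, &nodemask,
		  sizeof(nodemask) * 8, 0)) {
		perror("mbind");
		return 1;
	}
	return 0;
}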

> ---
> 
> Scenario 1: Adapt weighting based on the task's execution node
> 
> Many applications can achieve reasonable performance just by using the
> CXL memory on their local socket. However, most workloads do not pin
> tasks to a specific CPU node, and the current implementation does not
> adjust weights based on where the task is running.
> 

"Most workloads don't..." - but they can, and fairly cleanly via
cgroups/cpusets.
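
Rough sketch with a v2 cpuset (the "job0" cgroup, the cpu list, and the
mems list are invented for illustration; the cgroup is assumed to
already exist with the cpuset controller enabled):

#include <stdio.h>

/* confine a job to socket 0's CPUs and to nodes 0+2 (local DRAM+CXL) */
static int write_str(const char *path, const char *val)
{
	FILE *f = fopen(path, "w");

	if (!f)
		return -1;
	fputs(val, f);
	return fclose(f);
}

int main(void)
{
	if (write_str("/sys/fs/cgroup/job0/cpuset.cpus", "0-31") ||
	    write_str("/sys/fs/cgroup/job0/cpuset.mems", "0,2"))
		return 1;
	/* then move the job: echo $PID > /sys/fs/cgroup/job0/cgroup.procs */
	return 0;
}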

> If per-source-node weighting were available, the following matrix could
> be used:
> 
>          0     1     2     3
>      0   3     0     1     0
>      1   0     3     0     1
>
> This flexibility is currently not possible with a single flat weight
> array.

This can be done with a mempolicy that omits undesired nodes from the
nodemask - without requiring any changes.
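
Concretely, your scenario 1 matrix falls out of the existing interface
if the global weights are set once and each task's nodemask drops the
remote nodes - e.g. this sketch (node numbering as in your example,
weights written via the existing weighted_interleave sysfs files):

#include <stdio.h>

int main(void)
{
	/* global flat weights {3, 3, 1, 1}: a task on socket 0 with
	 * nodemask {0,2} sees 3:1 DRAM:CXL, a task on socket 1 with
	 * nodemask {1,3} sees the same - no matrix required */
	static const int weight[4] = { 3, 3, 1, 1 };
	char path[128];

	for (int n = 0; n < 4; n++) {
		FILE *f;

		snprintf(path, sizeof(path),
			 "/sys/kernel/mm/mempolicy/weighted_interleave/node%d",
			 n);
		f = fopen(path, "w");
		if (!f)
			return 1;
		fprintf(f, "%d\n", weight[n]);
		fclose(f);
	}
	return 0;
}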

> 
> Scenario 2: Reflect relative memory access performance
> 
> Remote memory access (e.g., from node0 to node3) incurs a real bandwidth
> penalty. Ideally, weights should reflect this. For example:
> 
> Bandwidth-based matrix:
> 
>          0     1     2     3
>      0   6     3     2     1
>      1   3     6     1     2
> 
> Or DRAM + local CXL only:
> 
>          0     1     2     3
>      0   6     0     2     1
>      1   0     6     1     2
> 
> While scenario 1 is probably more common in practice, both can be
> expressed within the same design if per-socket weights are supported.
> 

The core issue here is actually that NUMA has no good way to represent
cross-socket interconnect bandwidth, and that it abstracts away all the
devices behind a node (both DRAM and CXL).

So reasoning about this problem in terms of NUMA is trying to fit a
square peg into a round hole.  I think it's the wrong tool - maybe we
need a new one.  I don't know what that looks like yet.

> ---
> 
> <Proposed approach>
> 
> Instead of removing the current sysfs interface or flat weight logic, I
> propose introducing an optional "multi" mode for per-socket weights.
> This would allow users to opt into source-aware behavior.
> (The name 'multi' is just an example and should be changed to a more
> appropriate name in the future.)
> 
> Draft sysfs layout:
> 
>   /sys/kernel/mm/mempolicy/weighted_interleave/
>     +-- multi         (bool: enable per-socket mode)
>     +-- node0         (flat weight for legacy/default mode)
>     +-- node_groups/
>         +-- node0_group/
>         |   +-- node0  (weight of node0 when running on node0)
>         |   +-- node1
>         +-- node1_group/
>             +-- node0
>             +-- node1
> 

This is starting to look like memory-tiers.c, which is largely useless
at the moment.  Maybe we implement such logic in memory-tiers, and then 
extend mempolicy to have a MPOL_MEMORY_TIER or MPOL_F_MEMORY_TIER?

That would give us better flexibility to design the mempolicy interface
without having to be bound by the NUMA infrastructure it presently
depends on.  We can figure out how to collect cross-socket interconnect
information in memory-tiers, and see what issues we'll have with
engaging that information from the mempolicy/page allocator path.

You'll see that in the very early versions of weighted interleave I
originally implemented it via memory-tiers.  You might look there for
inspiration.

> <Additional implementation considerations>
> 
> 1. Compatibility: The proposal avoids breaking the current interface or
>    behavior and remains backward-compatible.
> 
> 2. Auto-tuning: Scenario 1 (local CXL + DRAM) likely works with minimal
>    change. Scenario 2 (bandwidth-aware tuning) would require more
>    development, and I would welcome Joshua's input on this.
> 
> 3. Zero weights: Currently the minimum weight is 1. We may want to allow
>    zero to fully support asymmetric exclusion.
>

I think we need to explore different changes here - it became fairly
clear from the tiering discussions at LSFMM that NUMA is a dated
abstraction that is showing its limits.  Let's ask what information we
want and how to structure/interact with it first, before designing the
sysfs interface for it.

~Gregory
