Message-ID: <87o7gzm22n.fsf@yhuang6-desk2.ccr.corp.intel.com>
Date:   Mon, 16 Oct 2023 15:57:52 +0800
From:   "Huang, Ying" <ying.huang@...el.com>
To:     Gregory Price <gourry.memverge@...il.com>
Cc:     linux-mm@...ck.org, linux-kernel@...r.kernel.org,
        linux-cxl@...r.kernel.org, akpm@...ux-foundation.org,
        sthanneeru@...ron.com, gregory.price@...verge.com
Subject: Re: [RFC PATCH v2 0/3] mm: mempolicy: Multi-tier weighted interleaving

Gregory Price <gourry.memverge@...il.com> writes:

> v2: change memtier mutex to semaphore
>     add source-node relative weighting
>     add remaining mempolicy integration code
>
> = v2 Notes
>
> Developed in collaboration with the original authors to deconflict
> similar efforts to extend mempolicy to take weights directly.
>
> == Mutex to Semaphore change:
>
> The memory tiering subsystem is extended in this patch set to expose
> externally visible information (weights), so additional controls are
> needed to ensure values are not changed (and tiers are not
> changed/added/removed) in the middle of calculations that consume them.
>
> Since it is expected that many threads will be accessing this data
> during allocations, a mutex is not appropriate.

IIUC, this is a change for performance.  If so, please show some
performance data.

> Since write-updates (weight changes, hotplug events) are rare events,
> a simple rw semaphore is sufficient.
>
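To make the intended pattern concrete, here is a minimal sketch of the
locking scheme described above (identifiers such as tier_weights and
MAX_TIERS are hypothetical, not the actual patch code): allocation-path
readers take the semaphore shared and never serialize against each
other, while the rare weight updates take it exclusive.

#include <linux/numa.h>
#include <linux/rwsem.h>

static DECLARE_RWSEM(memtier_rwsem);
/* Hypothetical storage: per-tier, per-source-node weights. */
static int tier_weights[MAX_TIERS][MAX_NUMNODES];

/* Hot path: many allocating threads may read weights concurrently. */
static int memtier_get_weight(int tier, int src_nid)
{
        int w;

        down_read(&memtier_rwsem);
        w = tier_weights[tier][src_nid];
        up_read(&memtier_rwsem);
        return w;
}

/* Cold path: sysfs updates and hotplug events take the lock exclusive. */
static void memtier_set_weight(int tier, int src_nid, int weight)
{
        down_write(&memtier_rwsem);
        tier_weights[tier][src_nid] = weight;
        up_write(&memtier_rwsem);
}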
> == Source-node relative weighting:
>
> Tiers can now be weighted differently based on the node requesting
> the weight.  For example, CPU nodes 0 and 1 may have different weights
> for the same CXL memory tier because the number of NUMA hops differs
> (or because of any other physical topological difference that results
> in different effective latency or bandwidth values).
>
> 1. Set weights for DDR (tier4) and CXL (tier22) tiers.
>    echo source_node:weight > /path/to/interleave_weight

If source_node is considered, why not consider target_node too?  On a
system with only 1 tier (DRAM), do you want weighted interleaving among
NUMA nodes?  If so, why tie weighted interleaving with memory tiers?
Why not just introduce weighted interleaving for NUMA nodes?
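For illustration only, weighted interleaving tied directly to NUMA
nodes could be as simple as the sketch below (hypothetical code, not
existing mempolicy internals; the per-node 'weights' table and the
per-task 'cur'/'credit' state are assumptions, and weights are assumed
to be at least 1):

#include <linux/nodemask.h>
#include <linux/types.h>

/* Pick the next node for weighted interleave over a plain nodemask,
 * with no memory-tier involvement: each node receives weights[node]
 * consecutive pages before the cursor advances to the next node. */
static int weighted_interleave_next(const nodemask_t *mask,
                                    const u8 *weights,
                                    int *cur, unsigned int *credit)
{
        if (*credit == 0) {
                *cur = next_node_in(*cur, *mask);
                *credit = weights[*cur];
        }
        (*credit)--;
        return *cur;
}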

> # Set tier4 weight from node 0 to 85
> echo 0:85 > /sys/devices/virtual/memory_tiering/memory_tier4/interleave_weight
> # Set tier4 weight from node 1 to 65
> echo 1:65 > /sys/devices/virtual/memory_tiering/memory_tier4/interleave_weight
> # Set tier22 weight from node 0 to 15
> echo 0:15 > /sys/devices/virtual/memory_tiering/memory_tier22/interleave_weight
> # Set tier22 weight from node 1 to 10
> echo 1:10 > /sys/devices/virtual/memory_tiering/memory_tier22/interleave_weight
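
If I read the interface correctly, with the values above an allocating
thread on node 0 would have its pages distributed roughly 85:15 between
tier4 and tier22, while a thread on node 1 would see roughly 65:10
(about 87% DDR to 13% CXL).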

--
Best Regards,
Huang, Ying
