Message-ID: <ZT8u2246+vkA/4F+@memverge.com>
Date:   Mon, 30 Oct 2023 00:19:39 -0400
From:   Gregory Price <gregory.price@...verge.com>
To:     "Huang, Ying" <ying.huang@...el.com>
Cc:     Gregory Price <gourry.memverge@...il.com>, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org, linux-cxl@...r.kernel.org,
        akpm@...ux-foundation.org, sthanneeru@...ron.com,
        "Aneesh Kumar K.V" <aneesh.kumar@...ux.ibm.com>,
        Wei Xu <weixugc@...gle.com>,
        Alistair Popple <apopple@...dia.com>,
        Dan Williams <dan.j.williams@...el.com>,
        Dave Hansen <dave.hansen@...el.com>,
        Johannes Weiner <hannes@...xchg.org>,
        Jonathan Cameron <Jonathan.Cameron@...wei.com>,
        Michal Hocko <mhocko@...nel.org>,
        Tim Chen <tim.c.chen@...el.com>, Yang Shi <shy828301@...il.com>
Subject: Re: [RFC PATCH v2 0/3] mm: mempolicy: Multi-tier weighted
 interleaving

On Mon, Oct 30, 2023 at 10:20:14AM +0800, Huang, Ying wrote:
> Gregory Price <gregory.price@...verge.com> writes:
> 
> The extending adds complexity to the kernel code and changes the kernel
> ABI.  So, IMHO, we need some real life use case to prove the added
> complexity is necessary.
> 
> For example, in [1], Johannes showed the use case to support to add
> per-memory-tier interleave weight.
> 
> [1] https://lore.kernel.org/all/20220607171949.85796-1-hannes@cmpxchg.org/
> 
> --
> Best Regards,
> Huang, Ying

Sorry, I misunderstood your question.

The use case is the same as the N:M interleave strategy between tiers,
and in fact the proposal for weights was directly inspired by the patch
you posted. We're searching for the best way to implement weights.

We've discussed placing these weights in:

1) mempolicy:
   https://lore.kernel.org/linux-cxl/20230914235457.482710-1-gregory.price@memverge.com/

2) tiers:
   https://lore.kernel.org/linux-cxl/20231009204259.875232-1-gregory.price@memverge.com/

and now
3) the nodes themselves
   RFC not posted yet

The use case is exactly the same as in the patch you posted: enabling
optimal distribution of memory to maximize memory bandwidth usage.

The use case is straightforward - consider a machine with the following
numa nodes:

1) Socket 0 - DRAM - ~400GB/s local bandwidth, less cross-socket
2) Socket 1 - DRAM - ~400GB/s local bandwidth, less cross-socket
3) CXL memory attached to Socket 0 - ~64GB/s per link
4) CXL memory attached to Socket 1 - ~64GB/s per link

The goal is to enable mempolicy to implement weighted interleave so that
a thread running on socket 0 can spread its memory across each numa node
(or some subset thereof) in a way that maximizes its bandwidth usage
across the various devices.
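
Purely as an illustration of what "weights" means here (these numbers are
not from any posted patch, the node IDs are made up, and real weights
would also have to account for the cross-socket limits discussed below),
a per-node weight table roughly proportional to the link bandwidths above
might look like:

/*
 * Hypothetical per-node interleave weights for the topology above,
 * roughly proportional to link bandwidth (~400GB/s DRAM vs ~64GB/s CXL).
 * Illustrative only; node IDs and values are assumptions, not taken from
 * any posted patch.
 */
static const unsigned int node_weight[] = {
        [0] = 6,        /* node 0: Socket 0 DRAM, ~400GB/s */
        [1] = 6,        /* node 1: Socket 1 DRAM, ~400GB/s */
        [2] = 1,        /* node 2: CXL attached to Socket 0, ~64GB/s */
        [3] = 1,        /* node 3: CXL attached to Socket 1, ~64GB/s */
};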

For example, let's consider a system with only 1) and 2) above (two
sockets w/ DRAM).

On an Intel system with UPI, the "effective" bandwidth available to a
task on Socket 0 is not 800GB/s; it's about 450-500GB/s, split roughly
300/200 between the sockets (you never get the full amount, and UPI
limits cross-socket bandwidth).

Today `numactl --interleave` will split your memory 50:50 between
sockets, which is just blatantly suboptimal.  In this case you would
prefer a 3:2 distribution (literally weights of 3 and 2 respectively).
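
As a rough sketch of what that 3:2 split means mechanically (a userspace
toy with made-up node IDs, not the kernel implementation), weighted
interleave is just round-robin that stays on a node for "weight" pages
before moving on:

/* Toy userspace model of weighted round-robin placement with the 3:2
 * weights above; not kernel code. Placing 10 pages yields 6 on node 0
 * and 4 on node 1.
 */
#include <stdio.h>

struct node_weight {
        int node;
        unsigned int weight;
};

int main(void)
{
        struct node_weight nw[] = { { 0, 3 }, { 1, 2 } };  /* 3:2 */
        int nr = 2, cur = 0;
        unsigned int used = 0;

        for (int page = 0; page < 10; page++) {
                printf("page %d -> node %d\n", page, nw[cur].node);
                if (++used >= nw[cur].weight) {
                        used = 0;
                        cur = (cur + 1) % nr;
                }
        }
        return 0;
}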

The extension to CXL then becomes obvious, as each individual node has a
different optimal weight depending on where it sits relative to the
accessing CPU.


Of course the question becomes "what if a task uses more threads than a
single socket has to offer", and the answer there is essentially the same
as it is today: that process must become "numa-aware" to make the best
use of the available resources.

However, for software capable of exhausting bandwidth from a single
socket (which on Intel takes about 16-20 threads with certain access
patterns), a weighted-interleave system provided via some interface like
`numactl --weighted-interleave`, with weights set either in numa nodes or
in mempolicy, is sufficient.
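
For reference, today's unweighted behavior can be requested through the
existing set_mempolicy(2) interface; the sketch below (error handling
trimmed, nodemask chosen arbitrarily) is what the proposal would
effectively extend with per-node or per-policy weights:

/* Today's unweighted interleave via the existing set_mempolicy(2) API
 * (<numaif.h> wrapper from libnuma; link with -lnuma). Allocations made
 * after this call are interleaved 50:50 across nodes 0 and 1; the
 * proposal above would let this honor weights such as 3:2 instead.
 */
#include <numaif.h>
#include <stdio.h>

int main(void)
{
        unsigned long nodemask = (1UL << 0) | (1UL << 1);  /* nodes 0,1 */

        if (set_mempolicy(MPOL_INTERLEAVE, &nodemask, sizeof(nodemask) * 8))
                perror("set_mempolicy");

        /* ... memory allocated after this point alternates between nodes ... */
        return 0;
}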


~Gregory
