Message-ID: <20231102141359.00000aa6@Huawei.com>
Date:   Thu, 2 Nov 2023 14:13:59 +0000
From:   Jonathan Cameron <Jonathan.Cameron@...wei.com>
To:     Ravi Jonnalagadda <ravis.opensrc@...ron.com>
CC:     <ying.huang@...el.com>, <akpm@...ux-foundation.org>,
        <aneesh.kumar@...ux.ibm.com>, <apopple@...dia.com>,
        <dave.hansen@...el.com>, <gourry.memverge@...il.com>,
        <gregkh@...uxfoundation.org>, <gregory.price@...verge.com>,
        <hannes@...xchg.org>, <linux-cxl@...r.kernel.org>,
        <linux-kernel@...r.kernel.org>, <linux-mm@...ck.org>,
        <mhocko@...e.com>, <rafael@...nel.org>, <shy828301@...il.com>,
        <tim.c.chen@...el.com>, <weixugc@...gle.com>
Subject: Re: [RFC PATCH v3 0/4] Node Weights and Weighted Interleave

> >
> >You mean the different memory ranges of a NUMA node may have different
> >performance?  I don't think that we can deal with this.  
> 
> Example Configuration: On a server that we are using now, four different
> CXL cards are combined to form a single NUMA node and two other cards are
> exposed as two individual numa nodes.
> So if we have the ability to combine multiple CXL memory ranges to a
> single NUMA node the number of NUMA nodes in the system would potentially
> decrease even if we can't combine the entire range to form a single node.
>

If it's under the control of the kernel: today, CXL NUMA nodes are defined by
CXL Fixed Memory Windows (CFMWS) rather than by the individual characteristics
of the devices that might be accessed through those windows.

That's a useful simplification to get things going, and it's not yet clear how
the QoS aspects of CFMWS will be used.  Will we always have enough windows, at
fine enough granularity, coming from the _DSM QTG magic that we don't end up
with devices (or topologies) of different performance within each one?

No idea.  It's a bunch of trade-offs over where the complexity lies and how
much memory is being provided over CXL vs. physical address space exhaustion.
 
Long term, my guess is we'll need to support something more sophisticated, with
dynamic 'creation' of NUMA nodes (or something that looks like that anyway),
so we can always have a separate node for each significantly different set of
memory access characteristics.  If the nodes are coming from ACPI, that's
already required by the specification.  This space is only going to get more
complex.

Upshot is that I wouldn't focus too much on the possibility of a NUMA node
containing devices with very different memory access characteristics.  That's a
quirk of today's world that we can and should look to fix.
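As an aside, the per-node weighting this series proposes really only makes
sense under that assumption of per-node homogeneity.  A toy userspace model of
weighted interleave, purely for illustration (node IDs and weights here are
made up, and this is not the kernel implementation):

```python
from itertools import cycle

def weighted_interleave(num_pages, node_weights):
    """Toy model of weighted interleave: spread page allocations across
    NUMA nodes in proportion to per-node weights (e.g. DRAM weighted
    more heavily than a slower CXL-attached node)."""
    placement = {node: 0 for node in node_weights}
    # Expand weights into a round-robin schedule: a node with weight w
    # receives w consecutive pages per pass over the node list.
    schedule = [n for n, w in node_weights.items() for _ in range(w)]
    for _, node in zip(range(num_pages), cycle(schedule)):
        placement[node] += 1
    return placement

# Hypothetical system: node 0 is DRAM (weight 3), node 1 is CXL (weight 1).
print(weighted_interleave(8, {0: 3, 1: 1}))  # {0: 6, 1: 2}
```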

If your BIOS is setting this up for you and presenting such mixed nodes in
SRAT / HMAT etc., then it's not complying with the ACPI spec.

Jonathan
