Message-ID: <87r0owy95t.fsf@yhuang6-desk2.ccr.corp.intel.com>
Date: Tue, 25 Jul 2023 11:14:38 +0800
From: "Huang, Ying" <ying.huang@...el.com>
To: Alistair Popple <apopple@...dia.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>, <linux-mm@...ck.org>,
<linux-kernel@...r.kernel.org>, <linux-cxl@...r.kernel.org>,
<nvdimm@...ts.linux.dev>, <linux-acpi@...r.kernel.org>,
"Aneesh Kumar K . V" <aneesh.kumar@...ux.ibm.com>,
Wei Xu <weixugc@...gle.com>,
Dan Williams <dan.j.williams@...el.com>,
Dave Hansen <dave.hansen@...el.com>,
"Davidlohr Bueso" <dave@...olabs.net>,
Johannes Weiner <hannes@...xchg.org>,
"Jonathan Cameron" <Jonathan.Cameron@...wei.com>,
Michal Hocko <mhocko@...nel.org>,
Yang Shi <shy828301@...il.com>,
Rafael J Wysocki <rafael.j.wysocki@...el.com>,
Dave Jiang <dave.jiang@...el.com>
Subject: Re: [PATCH RESEND 1/4] memory tiering: add abstract distance
calculation algorithms management

Hi, Alistair,

Thanks a lot for the comments!

Alistair Popple <apopple@...dia.com> writes:
> Huang Ying <ying.huang@...el.com> writes:
>
>> The abstract distance may be calculated by various drivers, such as
>> ACPI HMAT, CXL CDAT, etc., while it may be used by various code that
>> hot-adds memory nodes, such as dax/kmem. To decouple the algorithm
>> users from the providers, this patch implements a management
>> mechanism for the abstract distance calculation algorithms. It
>> provides an interface for the providers to register their
>> implementations, and an interface for the users.
>
> I wonder if we need this level of decoupling though? It seems to me
> like it would be simpler and better for drivers to calculate the
> abstract distance directly themselves by calling the desired
> algorithm (e.g. ACPI HMAT) and passing it when creating the nodes,
> rather than having a notifier chain.
Per my understanding, ACPI HMAT and memory device drivers (such as
dax/kmem) may belong to different subsystems (ACPI vs. dax). It's not
good to call functions across subsystems directly. So, I think it's
better to use a general subsystem, memory-tier.c, to decouple them.
If it turns out that a notifier chain is unnecessary, we can use
function pointers instead.
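
For example, the interface in memory-tier.c could look roughly like
the following sketch (a minimal sketch only; the names and the choice
of a blocking notifier chain are illustrative, not final):

  /* memory-tier.c */
  #include <linux/notifier.h>

  static BLOCKING_NOTIFIER_HEAD(mt_adistance_algorithms);

  /* Providers (ACPI HMAT, CXL CDAT, ...) register their algorithm. */
  int register_mt_adistance_algorithm(struct notifier_block *nb)
  {
          return blocking_notifier_chain_register(&mt_adistance_algorithms,
                                                  nb);
  }

  /* Users (e.g. dax/kmem) ask for the abstract distance of a node. */
  int mt_calc_adistance(int node, int *adist)
  {
          return blocking_notifier_call_chain(&mt_adistance_algorithms,
                                              node, adist);
  }
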
> At the moment it seems we've only identified two possible algorithms
> (ACPI HMAT and CXL CDAT) and I don't think it would make sense for
> one of those to fall back to the other based on priority, so why not
> just have drivers call the correct algorithm directly?
For example, we may have a system with PMEM (persistent memory, Optane
DCPMM, or AEP, or something else) in DIMM slots and CXL.mem connected
via a CXL link to a remote memory pool. We will need ACPI HMAT for the
PMEM and CXL CDAT for the CXL.mem. One way is to make dax/kmem
identify the type of each device and call the corresponding algorithm.
The other way (suggested by this series) is to make dax/kmem call a
notifier chain, so that CXL CDAT or ACPI HMAT can identify the type of
a device and calculate the distance if the type is right for them. I
don't think it's good to make dax/kmem know every possible type of
memory device.
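
To illustrate, a provider's callback could look roughly like the
following (a sketch only; node_has_hmat_data() and
hmat_calc_adistance() are made-up helpers):

  static int hmat_adistance_notify(struct notifier_block *nb,
                                   unsigned long node, void *data)
  {
          int *adist = data;

          /* Node not described by HMAT: let other algorithms try. */
          if (!node_has_hmat_data(node))
                  return NOTIFY_DONE;

          *adist = hmat_calc_adistance(node);
          /* Handled: stop lower-priority algorithms from overriding. */
          return NOTIFY_STOP;
  }
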
>> Multiple algorithm implementations can cooperate by calculating the
>> abstract distance for different memory nodes. The preference among
>> algorithm implementations can be specified via
>> priority (notifier_block.priority).
>
> How/what decides the priority though? That seems like something better
> decided by a device driver than the algorithm driver IMHO.
Do we need a memory-device-driver-specific priority? Or do we just
share a common priority? For example, is the priority of CXL CDAT
always higher than that of ACPI HMAT? Or is it architecture specific?
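
If we went with a common priority, it could be encoded directly in the
notifier blocks, for example (illustrative values only; entries with
higher priority are called first):

  static struct notifier_block hmat_adist_nb = {
          .notifier_call = hmat_adistance_notify,
          .priority = 100,
  };

  static struct notifier_block cdat_adist_nb = {
          .notifier_call = cdat_adistance_notify,
          .priority = 200,        /* prefer CDAT over HMAT */
  };
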
And, I don't think that we are forced to use the general notifier
chain interface in all memory device drivers. If a memory device
driver has a better understanding of its memory device, it can use
another way to determine the abstract distance. For example, a CXL
memory device driver can determine the abstract distance by itself,
while other memory device drivers use the general notifier chain
interface at the same time.
--
Best Regards,
Huang, Ying