Message-ID: <d752b9b3-c032-41c5-b10f-48b711a54eee@intel.com>
Date: Mon, 3 Nov 2025 18:00:08 -0700
From: Dave Jiang <dave.jiang@...el.com>
To: Robert Richter <rrichter@....com>,
Alison Schofield <alison.schofield@...el.com>,
Vishal Verma <vishal.l.verma@...el.com>, Ira Weiny <ira.weiny@...el.com>,
Dan Williams <dan.j.williams@...el.com>,
Jonathan Cameron <Jonathan.Cameron@...wei.com>,
Davidlohr Bueso <dave@...olabs.net>
Cc: linux-cxl@...r.kernel.org, linux-kernel@...r.kernel.org,
Gregory Price <gourry@...rry.net>,
"Fabio M. De Francesco" <fabio.m.de.francesco@...ux.intel.com>,
Terry Bowman <terry.bowman@....com>, Joshua Hahn <joshua.hahnjy@...il.com>
Subject: Re: [PATCH v4 10/14] cxl: Enable AMD Zen5 address translation using
ACPI PRMT
On 11/3/25 11:47 AM, Robert Richter wrote:
> Add AMD Zen5 support for address translation.
>
> Zen5 systems may be configured to use 'Normalized addresses'. In that
> mode, host physical addresses (HPAs) differ from system physical
> addresses (SPAs). The endpoint has its own physical address space and
> an incoming HPA is already converted to the device physical address
> (DPA). Interleaving is therefore disabled and CXL endpoints are
> programmed passthrough (DPA == HPA).
>
> Host Physical Addresses (HPAs) need to be translated from the endpoint
> to its CXL host bridge, especially to identify the endpoint's root
> decoder and the region's address range. The ACPI Platform Runtime
> Mechanism (PRM) provides a handler to translate a DPA to its SPA. This
> is documented in:
>
> AMD Family 1Ah Models 00h–0Fh and Models 10h–1Fh
> ACPI v6.5 Porting Guide, Publication # 58088
> https://www.amd.com/en/search/documentation/hub.html
>
> With Normalized Addressing, this PRM handler must be used to translate
> an endpoint's HPA to its SPA.
>
> Do the following to implement AMD Zen5 address translation:
>
> Introduce a new file core/atl.c to hold the ACPI PRM specific address
> translation code. The naming is loosely related to the kernel's AMD
> Address Translation Library (CONFIG_AMD_ATL), but the implementation
> does not depend on it, nor is it vendor specific. Use Kbuild and
> Kconfig options to enable the code depending on architecture and
> platform support.
>
> AMD Zen5 systems support the ACPI PRM CXL Address Translation firmware
> call (see ACPI v6.5 Porting Guide, Address Translation - CXL DPA to
> System Physical Address). Firmware enables the PRM handler if the
> platform has address translation implemented. Check firmware and
> kernel support of ACPI PRM using the handler's GUID. On success,
> enable address translation by setting up the previously introduced
> root port callback, see cxl_prm_translate_hpa_range(). Setup is done
> in cxl_setup_prm_address_translation(); it is the only function that
> needs to be exported. For low-level PRM firmware calls, use the ACPI
> framework.
>
> Identify the region's interleaving ways by inspecting the address
> ranges. Also determine the interleaving granularity using the address
> translation callback. Note that the position of the chunk from one
> interleaving block to the next may vary and thus cannot be considered
> constant. Address offsets larger than the interleaving block size
> cannot be used to calculate the granularity. Thus, probe the
> granularity using address translation for various HPAs in the same
> interleaving block.
>
> Signed-off-by: Robert Richter <rrichter@....com>
Just a small thing below, otherwise:
Reviewed-by: Dave Jiang <dave.jiang@...el.com>

> ---
> drivers/cxl/Kconfig | 4 +
> drivers/cxl/acpi.c | 2 +
> drivers/cxl/core/Makefile | 1 +
> drivers/cxl/core/atl.c | 195 ++++++++++++++++++++++++++++++++++++++
> drivers/cxl/cxl.h | 7 ++
> 5 files changed, 209 insertions(+)
> create mode 100644 drivers/cxl/core/atl.c
>
> diff --git a/drivers/cxl/Kconfig b/drivers/cxl/Kconfig
> index 48b7314afdb8..e599badba69b 100644
> --- a/drivers/cxl/Kconfig
> +++ b/drivers/cxl/Kconfig
> @@ -233,4 +233,8 @@ config CXL_MCE
> def_bool y
> depends on X86_MCE && MEMORY_FAILURE
>
> +config CXL_ATL
> + def_bool y
> + depends on ACPI_PRMT && AMD_NB
> +
> endif
> diff --git a/drivers/cxl/acpi.c b/drivers/cxl/acpi.c
> index a54d56376787..f9bbc77f3ec2 100644
> --- a/drivers/cxl/acpi.c
> +++ b/drivers/cxl/acpi.c
> @@ -916,6 +916,8 @@ static int cxl_acpi_probe(struct platform_device *pdev)
> cxl_root->ops.qos_class = cxl_acpi_qos_class;
> root_port = &cxl_root->port;
>
> + cxl_setup_prm_address_translation(cxl_root);
> +
> rc = bus_for_each_dev(adev->dev.bus, NULL, root_port,
> add_host_bridge_dport);
> if (rc < 0)
> diff --git a/drivers/cxl/core/Makefile b/drivers/cxl/core/Makefile
> index 5ad8fef210b5..11fe272a6e29 100644
> --- a/drivers/cxl/core/Makefile
> +++ b/drivers/cxl/core/Makefile
> @@ -20,3 +20,4 @@ cxl_core-$(CONFIG_CXL_REGION) += region.o
> cxl_core-$(CONFIG_CXL_MCE) += mce.o
> cxl_core-$(CONFIG_CXL_FEATURES) += features.o
> cxl_core-$(CONFIG_CXL_EDAC_MEM_FEATURES) += edac.o
> +cxl_core-$(CONFIG_CXL_ATL) += atl.o
> diff --git a/drivers/cxl/core/atl.c b/drivers/cxl/core/atl.c
> new file mode 100644
> index 000000000000..d6aa7e6d0ac5
> --- /dev/null
> +++ b/drivers/cxl/core/atl.c
> @@ -0,0 +1,195 @@
> +// SPDX-License-Identifier: GPL-2.0-only
> +/*
> + * Copyright (C) 2025 Advanced Micro Devices, Inc.
> + */
> +
> +#include <linux/prmt.h>
> +#include <linux/pci.h>
> +#include <linux/acpi.h>
> +
> +#include <cxlmem.h>
> +#include "core.h"
> +
> +/*
> + * PRM Address Translation - CXL DPA to System Physical Address
> + *
> + * Reference:
> + *
> + * AMD Family 1Ah Models 00h–0Fh and Models 10h–1Fh
> + * ACPI v6.5 Porting Guide, Publication # 58088
> + */
> +
> +static const guid_t prm_cxl_dpa_spa_guid =
> + GUID_INIT(0xee41b397, 0x25d4, 0x452c, 0xad, 0x54, 0x48, 0xc6, 0xe3,
> + 0x48, 0x0b, 0x94);
> +
> +struct prm_cxl_dpa_spa_data {
> + u64 dpa;
> + u8 reserved;
> + u8 devfn;
> + u8 bus;
> + u8 segment;
> + u64 *spa;
> +} __packed;
> +
> +static u64 prm_cxl_dpa_spa(struct pci_dev *pci_dev, u64 dpa)
> +{
> + struct prm_cxl_dpa_spa_data data;
> + u64 spa;
> + int rc;
> +
> + data = (struct prm_cxl_dpa_spa_data) {
> + .dpa = dpa,
> + .devfn = pci_dev->devfn,
> + .bus = pci_dev->bus->number,
> + .segment = pci_domain_nr(pci_dev->bus),
> + .spa = &spa,
> + };
> +
> + rc = acpi_call_prm_handler(prm_cxl_dpa_spa_guid, &data);
> + if (rc) {
> + pci_dbg(pci_dev, "failed to get SPA for %#llx: %d\n", dpa, rc);
> + return ULLONG_MAX;
> + }
> +
> + pci_dbg(pci_dev, "PRM address translation: DPA -> SPA: %#llx -> %#llx\n", dpa, spa);
> +
> + return spa;
> +}
> +
> +static int cxl_prm_translate_hpa_range(struct cxl_root *cxl_root, void *data)
> +{
> + struct cxl_region_context *ctx = data;
> + struct cxl_endpoint_decoder *cxled = ctx->cxled;
> + struct cxl_decoder *cxld = &cxled->cxld;
> + struct cxl_memdev *cxlmd = ctx->cxlmd;
> + struct range hpa_range = ctx->hpa_range;
> + struct pci_dev *pci_dev;
> + u64 spa_len, len = range_len(&hpa_range);
> + u64 addr, base_spa, base = hpa_range.start;
> + int ways, gran;
> +
> + /*
> + * When Normalized Addressing is enabled, the endpoint
> + * maintains a 1:1 mapping between HPA and DPA. If disabled,
> + * skip address translation and perform only a range check.
> + */
> + if (hpa_range.start != cxled->dpa_res->start)
> + return 0;
> +
> + if (!IS_ALIGNED(hpa_range.start, SZ_256M) ||
> + !IS_ALIGNED(hpa_range.end + 1, SZ_256M)) {
> + dev_dbg(cxld->dev.parent,
> + "CXL address translation: Unaligned decoder HPA range: %#llx-%#llx(%s)\n",
> + hpa_range.start, hpa_range.end, dev_name(&cxld->dev));
> + return -ENXIO;
> + }
> +
> + /*
> + * Endpoints are programmed passthrough in Normalized
> + * Addressing mode.
> + */
> + if (ctx->interleave_ways != 1) {
> + dev_dbg(&cxld->dev, "unexpected interleaving config: ways: %d granularity: %d\n",
> + ctx->interleave_ways, ctx->interleave_granularity);
> + return -ENXIO;
> + }
> +
> + if (!cxlmd || !dev_is_pci(cxlmd->dev.parent)) {
> + dev_dbg(&cxld->dev, "No endpoint found: %s, range %#llx-%#llx\n",
> + dev_name(cxld->dev.parent), hpa_range.start,
> + hpa_range.end);
> + return -ENXIO;
> + }
> +
> + pci_dev = to_pci_dev(cxlmd->dev.parent);
> +
> + /* Translate HPA range to SPA. */
> + hpa_range.start = base_spa = prm_cxl_dpa_spa(pci_dev, hpa_range.start);
> + hpa_range.end = prm_cxl_dpa_spa(pci_dev, hpa_range.end);
> +
> + if (hpa_range.start == ULLONG_MAX || hpa_range.end == ULLONG_MAX) {
> + dev_dbg(cxld->dev.parent,
> + "CXL address translation: Failed to translate HPA range: %#llx-%#llx:%#llx-%#llx(%s)\n",
> + hpa_range.start, hpa_range.end, ctx->hpa_range.start,
> + ctx->hpa_range.end, dev_name(&cxld->dev));
> + return -ENXIO;
> + }
> +
> + /*
> + * Since translated addresses include the interleaving
> + * offsets, align the range to 256 MB.
> + */
> + hpa_range.start = ALIGN_DOWN(hpa_range.start, SZ_256M);
> + hpa_range.end = ALIGN(hpa_range.end, SZ_256M) - 1;
> +
> + spa_len = range_len(&hpa_range);
> + if (!len || !spa_len || spa_len % len) {
> + dev_dbg(cxld->dev.parent,
> + "CXL address translation: HPA range not contiguous: %#llx-%#llx:%#llx-%#llx(%s)\n",
> + hpa_range.start, hpa_range.end, ctx->hpa_range.start,
> + ctx->hpa_range.end, dev_name(&cxld->dev));
> + return -ENXIO;
> + }
> +
> + ways = spa_len / len;
> + gran = SZ_256;
> +
Maybe init 'base' and 'base_spa' here. Makes it easier to follow rather
than having to go back up to recall what they were.
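
Something like this maybe (untested, just to illustrate; base_spa would
either need the extra prm_cxl_dpa_spa() call shown here, or keep its
assignment up at the translation site since hpa_range.start has already
been modified by ALIGN_DOWN() at this point):

	ways = spa_len / len;
	gran = SZ_256;
	base = ctx->hpa_range.start;			/* untranslated HPA start */
	base_spa = prm_cxl_dpa_spa(pci_dev, base);	/* its translated SPA */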
> + /*
> + * Determine interleave granularity
> + *
> + * Note: The position of the chunk from one interleaving block
> + * to the next may vary and thus cannot be considered
> + * constant. Address offsets larger than the interleaving
> + * block size cannot be used to calculate the granularity.
> + */
> + while (ways > 1 && gran <= SZ_16M) {
> + addr = prm_cxl_dpa_spa(pci_dev, base + gran);
> + if (addr != base_spa + gran)
> + break;
> + gran <<= 1;
> + }
> +
> + if (gran > SZ_16M) {
> + dev_dbg(cxld->dev.parent,
> + "CXL address translation: Cannot determine granularity: %#llx-%#llx:%#llx-%#llx(%s)\n",
> + hpa_range.start, hpa_range.end, ctx->hpa_range.start,
> + ctx->hpa_range.end, dev_name(&cxld->dev));
> + return -ENXIO;
> + }
> +
> + ctx->hpa_range = hpa_range;
> + ctx->interleave_ways = ways;
> + ctx->interleave_granularity = gran;
> +
> + dev_dbg(&cxld->dev,
> + "address mapping found for %s (hpa -> spa): %#llx+%#llx -> %#llx+%#llx ways:%d granularity:%d\n",
> + dev_name(ctx->cxlmd->dev.parent), base, len, hpa_range.start,
> + spa_len, ways, gran);
> +
> + return 0;
> +}
> +
> +void cxl_setup_prm_address_translation(struct cxl_root *cxl_root)
> +{
> + struct device *host = cxl_root->port.uport_dev;
> + u64 spa;
> + struct prm_cxl_dpa_spa_data data = { .spa = &spa, };
> + int rc;
> +
> + /*
> + * Applies only to PCIe Host Bridges which are children of the
> + * CXL Root Device (HID="ACPI0017"). Check this and drop
> + * cxl_test instances.
> + */
> + if (!acpi_match_device(host->driver->acpi_match_table, host))
> + return;
> +
> + /* Check kernel (-EOPNOTSUPP) and firmware support (-ENODEV) */
> + rc = acpi_call_prm_handler(prm_cxl_dpa_spa_guid, &data);
> + if (rc == -EOPNOTSUPP || rc == -ENODEV)
> + return;
> +
> + cxl_root->ops.translate_hpa_range = cxl_prm_translate_hpa_range;
> +}
> +EXPORT_SYMBOL_NS_GPL(cxl_setup_prm_address_translation, "CXL");
> diff --git a/drivers/cxl/cxl.h b/drivers/cxl/cxl.h
> index 94b9fcc07469..0af46d1b0abc 100644
> --- a/drivers/cxl/cxl.h
> +++ b/drivers/cxl/cxl.h
> @@ -790,6 +790,13 @@ static inline void cxl_dport_init_ras_reporting(struct cxl_dport *dport,
> struct device *host) { }
> #endif
>
> +#ifdef CONFIG_CXL_ATL
> +void cxl_setup_prm_address_translation(struct cxl_root *cxl_root);
> +#else
> +static inline
> +void cxl_setup_prm_address_translation(struct cxl_root *cxl_root) {}
> +#endif
> +
> struct cxl_decoder *to_cxl_decoder(struct device *dev);
> struct cxl_root_decoder *to_cxl_root_decoder(struct device *dev);
> struct cxl_switch_decoder *to_cxl_switch_decoder(struct device *dev);