Message-ID: <Z-t4_ymJsKEIz_r6@gourry-fedora-PF4VCD3F>
Date: Tue, 1 Apr 2025 01:26:23 -0400
From: Gregory Price <gourry@...rry.net>
To: Robert Richter <rrichter@....com>
Cc: Alison Schofield <alison.schofield@...el.com>,
Vishal Verma <vishal.l.verma@...el.com>,
Ira Weiny <ira.weiny@...el.com>,
Dan Williams <dan.j.williams@...el.com>,
Jonathan Cameron <Jonathan.Cameron@...wei.com>,
Dave Jiang <dave.jiang@...el.com>,
Davidlohr Bueso <dave@...olabs.net>, linux-cxl@...r.kernel.org,
linux-kernel@...r.kernel.org,
"Fabio M. De Francesco" <fabio.m.de.francesco@...ux.intel.com>,
Terry Bowman <terry.bowman@....com>
Subject: Re: [PATCH v2 10/15] cxl/region: Use root decoders interleaving
parameters to create a region
On Mon, Mar 31, 2025 at 09:59:45PM -0400, Gregory Price wrote:
> I have discovered on my Zen5 that either this code is incorrect, or my
> decoders are programmed incorrectly.
>
> decoderN.M | ig iw
> ----------------------
> decoder0.0 | 2 256
> decoder3.0 | 1 256
> decoder6.0 | 1 256
> region0 | 2 512 <--- Wrong
>
> *Arch quirk aside*, everything except region is as expected.
>
... snip ...
>
> Looking at a normal system, we'd expect this configuration:
>
> decoderN.M | ig iw
> ----------------------
> decoder0.0 | 2 256
> decoder3.0 | 1 512
> decoder6.0 | 2 256
>
> The above code produces the following:
> [1,512]
> [2,1024] <--- still wrong
>
... snip ...
>
> Can we not just always report the parent ways/granularity, and skip all
> the math? We'll always iterate to the root, and that's what we want the
> region to match, right?
>
> Better yet, can we not just do this in the region creation code, rather
> than having the endpoint carry it through to the region for some reason?
> Avoid adding the duplicate ways/granularity field all together.
>
Having tested just using the root decoder data, I now get the expected
1:512, but I realized the issue is also that the regionref uses the
endpoint->decoder interleave ways/granularity.
Before:
[]cxl region0: pci0000:d2:port1 cxl_port_setup_targets expected iw: 1 ig: 1024 [... snip ...]
[]cxl region0: pci0000:d2:port1 cxl_port_setup_targets got iw: 1 ig: 256 [... snip ...]

After:
[]cxl region0: pci0000:d2:port1 cxl_port_setup_targets expected iw: 1 ig: 512
[]cxl region0: pci0000:d2:port1 cxl_port_setup_targets got iw: 1 ig: 256
This makes sense. The Zen5 quirk here is that the endpoints are
programmed with a 0-base covering just their own capacity and have no
interleave set on them, while the host bridges have 1:256 to match the
endpoints; 1:256 in the host bridge with 2:256 in the root is the only
"correct" (in-spec) programming of this topology.
I think the only choice here is another arch_check_interleave()-style
hook in `cxl_port_setup_targets()` that catches this case. I haven't
run the numbers on what the results would be if the HB had 1:512
instead of 1:256, but I imagine another round of madness lies there.
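To sketch the shape of what I mean (purely hypothetical -- the name,
signature, and placement are invented for illustration and none of
this exists in the kernel): the hook would accept a host-bridge
decoder that mirrors the root's granularity in addition to the
generically expected parent_ig * parent_iw value:

```c
#include <stdbool.h>

/*
 * Hypothetical sketch of an arch quirk check for the Zen5 topology
 * described above; these names do not exist in the kernel. Accept a
 * host-bridge decoder programmed with the root's granularity (the
 * Zen5 behavior) as well as the generically expected value.
 */
static bool zen5_interleave_ok(unsigned int root_iw, unsigned int root_ig,
			       unsigned int hb_iw, unsigned int hb_ig)
{
	unsigned int expected_ig = root_ig * root_iw;

	if (hb_ig == expected_ig)	/* in-spec programming */
		return true;

	/* Zen5 quirk: HB mirrors the root granularity (1:256 under 2:256) */
	return hb_iw == 1 && hb_ig == root_ig;
}
```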
~Gregory