Message-ID: <a12f45f2-62eb-44fd-b7d1-675be1f49ca5@nvidia.com>
Date: Thu, 25 Sep 2025 14:30:08 +0300
From: Arto Merilainen <amerilainen@...dia.com>
To: Xu Yilun <yilun.xu@...ux.intel.com>,
"Aneesh Kumar K.V" <aneesh.kumar@...nel.org>
Cc: dan.j.williams@...el.com, linux-kernel@...r.kernel.org,
bhelgaas@...gle.com, aik@....com, lukas@...ner.de,
Samuel Ortiz <sameo@...osinc.com>, linux-pci@...r.kernel.org,
linux-coco@...ts.linux.dev
Subject: Re: [PATCH v4 07/10] PCI/IDE: Add IDE establishment helpers
On 25.9.2025 13.18, Xu Yilun wrote:
>> This is the change I am adding
>>
>> [...]
>>
>> +static int add_range_merge_overlap(struct range *range, int az, int nr_range,
>> +                                   u64 start, u64 end)
>> +{
>> +        int i;
>> +
>> +        if (start >= end)
>> +                return nr_range;
>> +
>> +        /* get new start/end: */
>> +        for (i = 0; i < nr_range; i++) {
>> +
>> +                if (!range[i].end)
>> +                        continue;
>> +
>> +                /* Try to add to the end */
>> +                if (range[i].end + 1 == start) {
>> +                        range[i].end = end;
>> +                        return nr_range;
>> +                }
>> +
>> +                /* Try to add to the start */
>> +                if (range[i].start == end + 1) {
>> +                        range[i].start = start;
>> +                        return nr_range;
>> +                }
>> +        }
>> +
>> +        /* Need to add it: */
>> +        return add_range(range, az, nr_range, start, end);
>> +}

Should we use bus addresses when programming the ranges? The 32-bit
ranges may be remapped (i.e. the range address doesn't match the bus
address).
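
To illustrate the concern, a minimal sketch (the wrapper function is
hypothetical; pcibios_resource_to_bus() is the existing helper that
translates a CPU-side resource into the bus-address view):

static void show_bus_view(struct pci_dev *pdev, struct resource *res)
{
        struct pci_bus_region region;

        pcibios_resource_to_bus(pdev->bus, &region, res);
        /* region.start != res->start when the window is remapped */
}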
>
> I noticed Arto has a good idea: at most 2 blocks are needed no matter
> what the MMIO layout is for PF/VF/MFD..., one for 32-bit, one for
> 64-bit. And the upstream bridge directly connected to the DSM device
> has already aggregated the 2 ranges on enumeration. [1] That greatly
> reduces the complexity. There is no need for callers to iterate the
> devices/resources to collect the ranges again.
>
I agree that the implementation feels unnecessarily complex given that
the aggregated ranges are already available.
For reference, I am using a routine like this for collecting the ranges
before programming the address association registers:

static int pci_res_to_ide_addr(struct pci_dev *pdev,
                               struct ide_addr_range *ide_addr)
{
        struct pci_dev *bridge = pci_upstream_bridge(pdev);
        struct pci_bus_region region;
        struct resource *res;
        int naddr = 0;

        res = &bridge->resource[PCI_BRIDGE_MEM_WINDOW];
        if (res->flags & IORESOURCE_MEM) {
                pcibios_resource_to_bus(bridge->bus, &region, res);
                ide_addr[naddr].start = region.start;
                ide_addr[naddr].end = region.end;
                naddr++;
        }

        res = &bridge->resource[PCI_BRIDGE_PREF_MEM_WINDOW];
        if (res->flags & IORESOURCE_PREFETCH) {
                pcibios_resource_to_bus(bridge->bus, &region, res);
                ide_addr[naddr].start = region.start;
                ide_addr[naddr].end = region.end;
                naddr++;
        }

        return naddr;
}
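
Each returned range could then be packed into the three IDE Address
Association register values. A sketch, not from the patch: the helper
name is made up, and the bit layout follows my reading of the PCIe
spec (1 MB-aligned base/limit; the Valid bit plus the low base/limit
bits in register 1, the upper 32 bits of limit/base in registers 2
and 3):

static void ide_addr_to_assoc_regs(const struct ide_addr_range *r,
                                   u32 *assoc1, u32 *assoc2, u32 *assoc3)
{
        *assoc1 = 0x1 |                                    /* Valid */
                  ((lower_32_bits(r->start) >> 20) << 8) | /* Base Lower */
                  (lower_32_bits(r->end) & 0xfff00000);    /* Limit Lower */
        *assoc2 = upper_32_bits(r->end);                   /* Limit Upper */
        *assoc3 = upper_32_bits(r->start);                 /* Base Upper */
}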
> For TDX, the firmware enforces setting up only one addr block for the
> RP, no matter how many supported blocks the RP actually has. That
> means TDX can only support 64-bit IDE ranges. I'd like to require an
> input parameter like "max_nr_mem_rp" for this purpose.

I do not think having only a limited number of addr blocks is unique to
TDX. Given that software needs only a couple of addr blocks when they
are programmed using the information in the Type 1 header of the
upstream bridge, I would expect hardware implementations not to have
more blocks than necessary.
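
On the "max_nr_mem_rp" idea, a minimal sketch of how a caller might
honor such a limit, assuming the collection routine above and that the
64-bit (prefetchable) window is appended last (the helper name and the
error handling are mine, not from the patch):

static int ide_addr_limit_blocks(struct ide_addr_range *ide_addr,
                                 int naddr, int max_nr_mem_rp)
{
        if (naddr <= max_nr_mem_rp)
                return naddr;
        if (max_nr_mem_rp < 1)
                return -ENOSPC;

        /* Drop the leading 32-bit window(s); keep the 64-bit one. */
        ide_addr[0] = ide_addr[naddr - 1];
        return 1;
}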
- R2