Message-Id: <b19a25bd-7fa1-4220-b8d6-919399a9e7d7@app.fastmail.com>
Date: Wed, 01 Mar 2023 12:57:43 +0100
From: "Arnd Bergmann" <arnd@...db.de>
To: "Manivannan Sadhasivam" <manivannan.sadhasivam@...aro.org>
Cc: "Bjorn Andersson" <andersson@...nel.org>,
"Konrad Dybcio" <konrad.dybcio@...aro.org>,
"Rob Herring" <robh+dt@...nel.org>,
krzysztof.kozlowski+dt@...aro.org, linux-arm-msm@...r.kernel.org,
devicetree@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 00/16] Qcom: Fix PCI I/O range defined in devicetree
On Wed, Mar 1, 2023, at 12:29, Manivannan Sadhasivam wrote:
> On Tue, Feb 28, 2023 at 05:58:37PM +0100, Arnd Bergmann wrote:
>> On Tue, Feb 28, 2023, at 17:47, Manivannan Sadhasivam wrote:
>> > Hi,
>> >
>> > This series fixes the issue with the PCI I/O ranges defined in the devicetree
>> > of Qualcomm SoCs, as reported by Arnd [1]. Most of the Qualcomm SoCs define a
>> > 1:1 (identity) mapping for the PCI I/O range, but the PCI device I/O ports are
>> > usually located between 0x0 and 64KiB/1MiB, so the defined PCI addresses are
>> > mostly bogus. The lack of bug reports on this issue indicates that no one has
>> > really tested legacy PCI devices with these SoCs.
>> >
>> > This series also contains a couple of cleanup patches that align the entries
>> > of the ranges property.
>>
>> Looks good to me. I already commented that we may also want to use
>> 64KB everywhere instead of 1MB for the per-host window size.
>
> I also spotted this discrepancy while working on this series, but the size
> does not seem to be universal across SoCs from different vendors, so I
> settled on whatever range was used before.
Makes sense. We could of course add another patch if necessary, and
it probably doesn't matter much. OTOH I don't think there is anything
SoC-specific in this, and we used to just truncate the window to 64KB
per domain. It only really becomes a problem if the total size of the
I/O ports for all domains in a system exceeds the 16MB virtual memory
area.
Arnd
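
As an illustrative sketch of the kind of ranges fix being discussed, the
fragment below shows an identity-mapped I/O entry next to one that maps the
same CPU window to PCI I/O address 0x0. The node name, compatible string,
addresses and sizes are hypothetical examples, not values taken from the
patches in this series or from any particular Qualcomm SoC:

	pcie0: pcie@1c00000 {
		compatible = "qcom,pcie-example";	/* hypothetical */
		device_type = "pci";
		#address-cells = <3>;
		#size-cells = <2>;

		/*
		 * Before: identity mapping, i.e. the PCI-side address equals
		 * the CPU-side address. Legacy I/O ports live in the first
		 * 64KiB of PCI I/O space, so a PCI address of 0x20200000 is
		 * effectively bogus:
		 *
		 * ranges = <0x01000000 0x0 0x20200000 0x0 0x20200000 0x0 0x100000>;
		 */

		/*
		 * After: same 1MiB CPU window, but the PCI I/O address now
		 * starts at 0x0. A 64KiB size (0x10000) would also be enough
		 * for legacy I/O ports, per the discussion above. The MMIO
		 * ranges entry is omitted here for brevity.
		 */
		ranges = <0x01000000 0x0 0x00000000 0x0 0x20200000 0x0 0x100000>;
	};

Each host's I/O window is then mapped by the kernel into the shared virtual
I/O area mentioned above, which is why the total size across all domains is
what ultimately matters.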