Message-ID: <5f68453b-206b-49a5-aae5-72a14ce65cab@nvidia.com>
Date: Mon, 15 Jan 2024 19:20:37 +0530
From: Vidya Sagar <vidyas@...dia.com>
To: Thomas Gleixner <tglx@...utronix.de>, bhelgaas@...gle.com,
rdunlap@...radead.org, ilpo.jarvinen@...ux.intel.com,
jiang.liu@...ux.intel.com
Cc: linux-pci@...r.kernel.org, linux-kernel@...r.kernel.org,
treding@...dia.com, jonathanh@...dia.com, sdonthineni@...dia.com,
kthota@...dia.com, mmaddireddy@...dia.com, sagar.tv@...il.com
Subject: Re: [PATCH V3] PCI/MSI: Fix MSI hwirq truncation
On 1/15/2024 3:31 PM, Thomas Gleixner wrote:
> On Fri, Jan 12 2024 at 23:03, Vidya Sagar wrote:
>> On 1/12/2024 9:23 PM, Thomas Gleixner wrote:
>>> On Thu, Jan 11 2024 at 10:58, Vidya Sagar wrote:
>>>> So, cast the PCI domain number to 'irq_hw_number_t' before
>>>> left-shifting it to calculate the hwirq number.
>>>
>>> This still does not explain that this fixes it only on 64-bit platforms
>>> and why we don't care for 32-bit systems.
>> Agreed, this fixes the issue only on 64-bit platforms; it doesn't
>> change the behavior on 32-bit platforms. My understanding is that the
>> issue surfaces only when a system has a large number of PCIe
>> controllers, which is usually the case in modern server systems, and
>> it is arguable whether such servers really run 32-bit kernels.
>
> Arguably people who do that can keep the pieces.
>
>> One way to fix it for both 32-bit and 64-bit systems is to change the
>> type of 'hwirq' to u64. On 32-bit systems that may cause two memory
>> reads whenever 'hwirq' is accessed, which could in turn have some
>> performance impact. Is this the way you think I should handle it?
>
> No. Leave it as is. What I'm asking for is that it's properly documented
> in the changelog.
Sure. I'll add this extra information to the changelog.
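
For reference, here's a minimal userspace sketch of the truncation (not
the kernel code itself; the shift positions mirror the hwirq layout in
pci_msi_domain_calc_hwirq(), and the domain/device values are made up):

#include <stdio.h>
#include <stdint.h>

/* Stand-in for the kernel's irq_hw_number_t (unsigned long, 64-bit here). */
typedef uint64_t irq_hw_number_t;

int main(void)
{
	int domain_nr = 0x10000;   /* hypothetical PCI domain on a large server */
	unsigned int devid = 0x08; /* hypothetical bus/devfn id */
	unsigned int index = 0;    /* MSI index */

	/*
	 * Buggy: 'domain_nr & 0xFFFFFFFF' is still a 32-bit value, so
	 * '<< 27' shifts the upper domain bits out of the 32-bit result
	 * and they are lost before any widening to 64 bits happens.
	 */
	irq_hw_number_t bad = index | devid << 11 |
			      (domain_nr & 0xFFFFFFFF) << 27;

	/*
	 * Fixed: cast to irq_hw_number_t first, so the shift is done
	 * in 64 bits and the domain number survives intact.
	 */
	irq_hw_number_t good = index | devid << 11 |
			       ((irq_hw_number_t)domain_nr & 0xFFFFFFFF) << 27;

	printf("bad  = 0x%016llx\n", (unsigned long long)bad);  /* domain bits gone */
	printf("good = 0x%016llx\n", (unsigned long long)good); /* bit 43 set */
	return 0;
}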
>
> Thanks,
>
> tglx
>