Message-ID: <847e4b85-f081-e214-dc24-75a7b3c2c885@huawei.com>
Date: Sat, 15 Apr 2023 09:38:28 +0800
From: Yipeng Zou <zouyipeng@...wei.com>
To: Marc Zyngier <maz@...nel.org>
CC: <tglx@...utronix.de>, <samuel@...lland.org>,
<oleksandr_tyshchenko@...m.com>, <andy.shevchenko@...il.com>,
<apatel@...tanamicro.com>, <lvjianmin@...ngson.cn>,
<linux-kernel@...r.kernel.org>, <chris.zjh@...wei.com>,
<liaochang1@...wei.com>, James Gowans <jgowans@...zon.com>
Subject: Re: [RFC PATCH] genirq: introduce handle_fasteoi_edge_irq flow
handler
On 2023/4/14 19:25, Marc Zyngier wrote:
> On Fri, 10 Mar 2023 10:14:17 +0000,
> Yipeng Zou <zouyipeng@...wei.com> wrote:
>> Recently, we hit an LPI migration issue on an ARM SMP platform.
>>
>> For example, a NIC device generates an MSI and sends an LPI to CPU0
>> via the ITS. Meanwhile, irqbalance running on CPU1 sets the NIC's irq
>> affinity to CPU1, so the next interrupt is delivered to CPU1. Because
>> the irq state is still marked in-progress, the kernel never runs the
>> irq handler on CPU1, which results in some userland service timeouts.
>> The sequence of events is as follows:
>>
>> NIC                 CPU0                        CPU1
>>
>> Generate IRQ#1
>>                     READ_IAR
>>                     Lock irq_desc
>>                     Set IRQD_IN_PROGRESS
>>                     Unlock irq_desc
>>                                                 Lock irq_desc
>>                                                 Change LPI Affinity
>>                                                 Unlock irq_desc
>>                     Call irq_handler
>> Generate IRQ#2
>>                                                 READ_IAR
>>                                                 Lock irq_desc
>>                                                 Check IRQD_IN_PROGRESS
>>                                                 Unlock irq_desc
>>                                                 Return from interrupt#2
>>                     Lock irq_desc
>>                     Clear IRQD_IN_PROGRESS
>>                     Unlock irq_desc
>>                     Return from interrupt#1
>>
>> In this scenario, IRQ#2 is lost, which causes real failures.
> Please see my reply to James at [1]. I'd appreciate if you could give
> that patch a go, which I expect to be a better avenue to fix what is
> effectively a GIC architecture defect.
>
> Thanks,
>
> M.
>
> [1] https://lore.kernel.org/all/86pm89kyyt.wl-maz@kernel.org/
Hi Marc,

Thanks for your time on this issue.

I have seen your latest patch and am preparing an environment for
testing; I will send the test results later.
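For reference, the race in the quoted diagram can be modelled with a
minimal sketch (plain Python, not kernel code; the IrqDesc/deliver
names are mine, standing in for irq_desc and the fasteoi flow handler):

```python
class IrqDesc:
    """Toy stand-in for struct irq_desc: an in-progress flag plus a
    count of how many times the device handler actually ran."""
    def __init__(self):
        self.in_progress = False
        self.handled = 0

def deliver(desc):
    """Model of the flow handler's IRQD_IN_PROGRESS check: if another
    CPU is still in the handler, EOI and bail out. For an edge-type
    source there is nothing to re-sample later, so that edge is gone."""
    if desc.in_progress:
        return False          # IRQ discarded
    desc.in_progress = True
    desc.handled += 1         # device handler runs here
    return True               # flag stays set until the handler ends

desc = IrqDesc()

# CPU0: IRQ#1 arrives, IRQD_IN_PROGRESS is set, handler starts.
irq1_handled = deliver(desc)

# CPU1: affinity already moved; IRQ#2 arrives while CPU0 is mid-handler.
irq2_handled = deliver(desc)

# CPU0: handler for IRQ#1 completes, flag is cleared.
desc.in_progress = False

print(f"IRQ#1 handled={irq1_handled}, IRQ#2 handled={irq2_handled}, "
      f"handler runs={desc.handled}")
```

The return-without-handling path taken on CPU1 is exactly where IRQ#2
disappears: the flow handler has no pending state latched for the edge,
so nothing ever replays it.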
--
Regards,
Yipeng Zou