Message-ID: <4f8f3958-2976-b0a7-8d17-440ecaba0fc8@huawei.com>
Date: Mon, 2 Mar 2020 16:18:07 +0800
From: Zenghui Yu <yuzenghui@...wei.com>
To: Marc Zyngier <maz@...nel.org>
CC: <linux-arm-kernel@...ts.infradead.org>,
<kvmarm@...ts.cs.columbia.edu>, <kvm@...r.kernel.org>,
<linux-kernel@...r.kernel.org>,
Lorenzo Pieralisi <lorenzo.pieralisi@....com>,
Jason Cooper <jason@...edaemon.net>,
"Robert Richter" <rrichter@...vell.com>,
Thomas Gleixner <tglx@...utronix.de>,
"Eric Auger" <eric.auger@...hat.com>,
James Morse <james.morse@....com>,
"Julien Thierry" <julien.thierry.kdev@...il.com>,
Suzuki K Poulose <suzuki.poulose@....com>
Subject: Re: [PATCH v4 08/20] irqchip/gic-v4.1: Plumb get/set_irqchip_state
SGI callbacks
On 2020/3/2 3:00, Marc Zyngier wrote:
> On 2020-02-28 19:37, Marc Zyngier wrote:
>> On 2020-02-20 03:11, Zenghui Yu wrote:
>
>>> Do we really need to grab the vpe_lock for callbacks that belong to
>>> the same irqchip as its_vpe_set_affinity()? The IRQ core code should
>>> already ensure mutual exclusion among them, right?
>>
>> I've been trying to think about that, but jet-lag keeps getting in
>> the way. I empirically think that you are right, but I need to go and
>> check the various code paths to be sure. Hopefully I'll have a bit
>> more brain space next week.
>
> So I slept on it and came back to my senses. The only case we actually need
> to deal with is when an affinity change impacts *another* interrupt.
>
> There are only two instances of this issue:
>
> - vLPIs have their *physical* affinity impacted by the affinity of the
> vPE. Their virtual affinity is of course unchanged, but the physical
> one becomes important with direct invalidation. Taking a per-vPE lock
> in such a context should address the issue.
>
> - vSGIs have the exact same issue, plus the need for some *extra*
> locking when reading the pending state, which requires an RMW
> across two different registers. This calls for an extra per-RD lock.
Agreed with both!
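
Just to make sure I'm reading the intended locking correctly, here is a
rough sketch of how I picture the vSGI pending-state read. This is only
my own sketch, not code taken from the branch: vpe_to_cpuid_lock()/
vpe_to_cpuid_unlock(), the per-RD 'rd_lock' and the GICR_VSGIPENDR_BUSY
define are names I'm assuming, while GICR_VSGIR/GICR_VSGIPENDR are the
two GICv4.1 registers involved in the RMW.

/*
 * Sketch only -- driver-local helpers and defines assumed, not quoted
 * from the branch.
 */
static int sketch_vsgi_get_pending(struct its_vpe *vpe, unsigned int sgi,
				   bool *val)
{
	void __iomem *base;
	unsigned long flags;
	u32 count = 1000000;	/* arbitrary polling budget */
	u32 status;
	int cpu;

	/*
	 * Per-vPE lock: pin the vPE to its current redistributor so a
	 * concurrent affinity change cannot move it mid-sequence.
	 */
	cpu = vpe_to_cpuid_lock(vpe, &flags);

	/*
	 * Per-RD lock: the pending read is a write to GICR_VSGIR
	 * followed by polling GICR_VSGIPENDR, and the two accesses
	 * must not interleave with another reader on the same RD.
	 */
	raw_spin_lock(&gic_data_rdist_cpu(cpu)->rd_lock);

	/* vLPI/vSGI frame assumed at RD base + 128K */
	base = gic_data_rdist_cpu(cpu)->rd_base + SZ_128K;

	writel_relaxed(vpe->vpe_id, base + GICR_VSGIR);
	do {
		status = readl_relaxed(base + GICR_VSGIPENDR);
		if (!(status & GICR_VSGIPENDR_BUSY))
			break;
		count--;
		cpu_relax();
		udelay(1);
	} while (count);

	raw_spin_unlock(&gic_data_rdist_cpu(cpu)->rd_lock);
	vpe_to_cpuid_unlock(vpe, flags);

	if (!count)
		return -ENXIO;

	*val = !!(status & (1 << sgi));
	return 0;
}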
>
> My original patch was stupidly complex, and the irq_desc lock is
> perfectly sufficient to deal with anything that only affects the
> interrupt state itself.
>
> GICv4 + direct invalidation for vLPIs breaks this by bypassing the
> serialization initially provided by the ITS, as the RD is completely
> out of band. The per-vPE lock brings back this serialization.
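
Makes sense. The way I picture the vPE side (again a sketch of my own,
with guessed names -- vpe_to_cpuid_lock() and its_send_vmovp() are not
necessarily what the branch uses): the affinity change takes the same
per-vPE lock, so anything that samples vpe->col_idx for a direct RD
access is serialized against the move.

/*
 * Sketch only: the affinity change takes the same per-vPE lock as the
 * direct RD accesses, so nobody samples vpe->col_idx while the vPE is
 * being moved.
 */
static int sketch_vpe_set_affinity(struct its_vpe *vpe, int new_cpu)
{
	unsigned long flags;
	int from;

	from = vpe_to_cpuid_lock(vpe, &flags);
	if (from == new_cpu)
		goto out;

	/* From here on, direct RD accesses target the new RD */
	vpe->col_idx = new_cpu;
	its_send_vmovp(vpe);	/* retarget the vPE at the ITS level */

out:
	vpe_to_cpuid_unlock(vpe, flags);
	return IRQ_SET_MASK_OK_DONE;
}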
>
> I've updated the branch, which seems to run OK on D05. I still need
> to run the usual tests on the FVP model though.
I have pulled the latest branch and it looks good to me, except for
one remaining concern:
GICR_INV{LPI,ALL}R + GICR_SYNCR can also be accessed concurrently by
multiple direct invalidations. Should we also use the per-RD lock to
ensure mutual exclusion there? It doesn't look too harmful, as it would
only lengthen the polling on the Busy bit (in my view), but I'm pointing
it out again for confirmation.
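
If we do take it, what I have in mind is simply extending the per-RD
lock to cover the whole invalidate + sync sequence, roughly as below
(sketch only; the helpers, the lock name and the pre-encoded INVLPIR
value are assumptions on my part):

/*
 * Sketch only: serialize one direct invalidation (GICR_INVLPIR write
 * followed by a GICR_SYNCR poll) against others targeting the same RD.
 * 'val' is the pre-encoded INVLPIR value (V, vPEID, vINTID for a vLPI).
 */
static void sketch_direct_lpi_inv(struct its_vpe *vpe, u64 val)
{
	void __iomem *rdbase;
	unsigned long flags;
	int cpu;

	/* Pin the vPE to its current RD for the whole sequence */
	cpu = vpe_to_cpuid_lock(vpe, &flags);

	/* Per-RD lock: don't interleave two INVLPIR/SYNCR sequences */
	raw_spin_lock(&gic_data_rdist_cpu(cpu)->rd_lock);

	rdbase = gic_data_rdist_cpu(cpu)->rd_base;
	gic_write_lpir(val, rdbase + GICR_INVLPIR);

	/* Poll GICR_SYNCR.Busy until the RD has finished */
	while (readl_relaxed(rdbase + GICR_SYNCR) & 1)
		cpu_relax();

	raw_spin_unlock(&gic_data_rdist_cpu(cpu)->rd_lock);
	vpe_to_cpuid_unlock(vpe, flags);
}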
Thanks,
Zenghui