Message-ID: <db819547d4be8daa458bcd56aac2efcd@kernel.org>
Date: Mon, 02 Mar 2020 12:09:33 +0000
From: Marc Zyngier <maz@...nel.org>
To: Zenghui Yu <yuzenghui@...wei.com>
Cc: linux-arm-kernel@...ts.infradead.org, kvmarm@...ts.cs.columbia.edu,
kvm@...r.kernel.org, linux-kernel@...r.kernel.org,
Lorenzo Pieralisi <lorenzo.pieralisi@....com>,
Jason Cooper <jason@...edaemon.net>,
Robert Richter <rrichter@...vell.com>,
Thomas Gleixner <tglx@...utronix.de>,
Eric Auger <eric.auger@...hat.com>,
James Morse <james.morse@....com>,
Julien Thierry <julien.thierry.kdev@...il.com>,
Suzuki K Poulose <suzuki.poulose@....com>
Subject: Re: [PATCH v4 08/20] irqchip/gic-v4.1: Plumb get/set_irqchip_state
SGI callbacks

Hi Zenghui,
On 2020-03-02 08:18, Zenghui Yu wrote:
> On 2020/3/2 3:00, Marc Zyngier wrote:
>> On 2020-02-28 19:37, Marc Zyngier wrote:
>>> On 2020-02-20 03:11, Zenghui Yu wrote:
>>
>>>> Do we really need to grab the vpe_lock for those that belong to the
>>>> same irqchip as its_vpe_set_affinity()? The IRQ core code should
>>>> already ensure mutual exclusion among them, no?
>>>
>>> I've been trying to think about that, but jet-lag keeps getting in
>>> the way. I empirically think that you are right, but I need to go and
>>> check the various code paths to be sure. Hopefully I'll have a bit
>>> more brain space next week.
>>
>> So I slept on it and came back to my senses. The only case we actually
>> need to deal with is when an affinity change impacts *another*
>> interrupt.
>>
>> There are only two instances of this issue:
>>
>> - vLPIs have their *physical* affinity impacted by the affinity of the
>>   vPE. Their virtual affinity is of course unchanged, but the physical
>>   one becomes important with direct invalidation. Taking a per-vPE
>>   lock in such a context should address the issue.
>>
>> - vSGIs have the exact same issue, plus the need for some *extra*
>>   locking when reading the pending state, which requires an RMW
>>   across two different registers. This requires an extra per-RD lock
>>   (sketched below).
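
For completeness, the pending-state read ends up looking roughly like
this. Completely untested sketch: "rd_lock" is just a strawman name for
the per-RD lock, and names such as its_sgi_get_pending(), col_idx and
vpe_id are illustrative rather than what the branch necessarily uses:

static int its_sgi_get_pending(struct its_vpe *vpe, int sgi, bool *val)
{
        void __iomem *base;
        unsigned long flags;
        u32 status;

        /* The vSGI registers live in the frame 128K after RD_base */
        base = gic_data_rdist_cpu(vpe->col_idx)->rd_base + SZ_128K;

        /*
         * Reading the pending state is an RMW across two registers
         * (GICR_VSGIR, then GICR_VSGIPENDR), hence the per-RD lock.
         */
        raw_spin_lock_irqsave(&gic_data_rdist_cpu(vpe->col_idx)->rd_lock,
                              flags);

        /* Select the vPE whose SGIs we want to sample... */
        writel_relaxed(vpe->vpe_id, base + GICR_VSGIR);

        /* ...and wait for the RD to expose a coherent snapshot */
        do {
                status = readl_relaxed(base + GICR_VSGIPENDR);
        } while (status & GICR_VSGIPENDR_BUSY);

        *val = !!(status & (1 << sgi));

        raw_spin_unlock_irqrestore(&gic_data_rdist_cpu(vpe->col_idx)->rd_lock,
                                   flags);

        return 0;
}
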
>
> Agreed with both!
>
>>
>> My original patch was stupidly complex, and the irq_desc lock is
>> perfectly sufficient to deal with anything that only affects the
>> interrupt state itself.
>>
>> GICv4 + direct invalidation for vLPIs breaks this by bypassing the
>> serialization initially provided by the ITS, as the RD is completely
>> out of band. The per-vPE lock brings back this serialization.
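
The shape of it, roughly (again untested; vpe_lock and col_idx are
simply the names I'm using here for the per-vPE lock and the vPE's
physical target CPU):

static int its_vpe_set_affinity(struct irq_data *d,
                                const struct cpumask *mask_val,
                                bool force)
{
        struct its_vpe *vpe = irq_data_get_irq_chip_data(d);
        unsigned long flags;

        /*
         * Nobody can issue a direct invalidation on behalf of this
         * vPE while its physical affinity is being changed.
         */
        raw_spin_lock_irqsave(&vpe->vpe_lock, flags);
        vpe->col_idx = cpumask_first(mask_val);
        /* VMOVP / redistributor reprogramming elided */
        raw_spin_unlock_irqrestore(&vpe->vpe_lock, flags);

        return IRQ_SET_MASK_OK_DONE;
}

static void its_vlpi_direct_inv(struct its_vpe *vpe, u32 hwirq)
{
        unsigned long flags;

        /* Pin the vPE's physical affinity while we target its RD */
        raw_spin_lock_irqsave(&vpe->vpe_lock, flags);
        direct_lpi_inv(vpe->col_idx, hwirq); /* see the end of this mail */
        raw_spin_unlock_irqrestore(&vpe->vpe_lock, flags);
}
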
>>
>> I've updated the branch, which seems to run OK on D05. I still need
>> to run the usual tests on the FVP model though.
>
> I have pulled the latest branch and it looks good to me, except for
> one remaining concern:
>
> GICR_INV{LPI, ALL}R + GICR_SYNCR can also be accessed concurrently by
> multiple direct invalidations; should we also use the per-RD lock to
> ensure mutual exclusion? It doesn't look too harmful though, as it
> would only increase the polling time on the Busy bit (in my view).
>
> But I'm pointing it out again for confirmation.

I was about to say that it doesn't really matter because it is only a
performance optimisation (and we're not quite there yet), until I
spotted this great nugget in the spec:
<quote>
Writing GICR_INVLPIR or GICR_INVALLR when GICR_SYNCR.Busy==1 is
CONSTRAINED UNPREDICTABLE:
- The write is IGNORED.
- The invalidate specified by the write is performed.
</quote>
So we really need some form of mutual exclusion on a per-RD basis to
ensure that no two invalidations occur at the same time, ensuring that
Busy clears between the two.
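
Something along these lines, I expect (untested; "rd_lock" is the same
assumed per-RD lock as above, and Busy is bit 0 of GICR_SYNCR):

static void direct_lpi_inv(int cpu, u64 val)
{
        void __iomem *rdbase = gic_data_rdist_cpu(cpu)->rd_base;
        unsigned long flags;

        /*
         * Serialise all direct invalidations targeting this RD, and
         * only drop the lock once Busy has cleared, so that no write
         * to GICR_INVLPIR can ever observe Busy==1.
         */
        raw_spin_lock_irqsave(&gic_data_rdist_cpu(cpu)->rd_lock, flags);

        writeq_relaxed(val, rdbase + GICR_INVLPIR);

        while (readl_relaxed(rdbase + GICR_SYNCR) & 1)
                cpu_relax();

        raw_spin_unlock_irqrestore(&gic_data_rdist_cpu(cpu)->rd_lock, flags);
}
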
Thanks for the heads up,
M.
--
Jazz is not dead. It just smells funny...