Message-ID: <A0F7B0A1-0441-47A4-A045-07604A6C1842@amazon.com>
Date: Thu, 9 Mar 2023 08:56:18 +0000
From: "Krcka, Tomas" <krckatom@...zon.de>
To: Robin Murphy <robin.murphy@....com>
CC: "Krcka, Tomas" <krckatom@...zon.de>,
"baolu.lu@...ux.intel.com" <baolu.lu@...ux.intel.com>,
"iommu@...ts.linux.dev" <iommu@...ts.linux.dev>,
"joro@...tes.org" <joro@...tes.org>,
"linux-arm-kernel@...ts.infradead.org"
<linux-arm-kernel@...ts.infradead.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"shameerali.kolothum.thodi@...wei.com"
<shameerali.kolothum.thodi@...wei.com>,
"will@...nel.org" <will@...nel.org>
Subject: Re: [PATCH] iommu/arm-smmu-v3: Fix event queue overflow
acknowledgment
>
> On 2023-03-08 14:02, Tomas Krcka wrote:
>>>> When an overflow occurs in the event queue, the SMMU toggles the
>>>> overflow flag OVFLG in the PROD register.
>>>> The evtq thread is supposed to acknowledge the overflow by toggling
>>>> the OVACKFLG flag in the CONS register; otherwise the overflow
>>>> condition remains active (OVFLG != OVACKFLG).
>>>>
>>>> Currently the acknowledge flag is toggled after clearing the event
>>>> queue but is never propagated to the hardware. That only happens the
>>>> next time the evtq thread runs.
>>>>
>>>> The SMMU still adds elements to the queue while the overflow condition
>>>> is active, but any subsequent overflow information after clearing the
>>>> event queue will be lost.
>>>>
>>>> This change keeps the SMMU in sync, as expected by design.
>>>
>>> If I've understood correctly, the upshot of this is that if the queue
>>> has overflowed once, become empty, then somehow goes from empty to full
>>> before we manage to consume a single event, we won't print the "events
>>> lost" message a second time.
>>>
>>> Have you seen this happen in practice? TBH if the event queue ever
>>> overflows even once it's indicative that the system is hosed anyway, so
>>> it's not clear to me that there's any great loss of value in sometimes
>>> failing to repeat a warning for a chronic ongoing operational failure.
>>>
>>
>> Yes, I did see it in practice. And it's not just about losing a subsequent warning.
>> The way it's done now leaves the CONS register value inconsistent between the SMMU and
>> the kernel until a new event arrives. The kernel doesn't inform the SMMU that we know
>> about the overflow and are consuming events as fast as we can.
>
> Interesting - out of curiosity, is something blocking the IRQ thread
> from running in a timely manner, or are you just using a really tiny
> event queue?
In our case it was the tiny event queue.
>
> Either way though, the point is that there is nothing to "inform" the
> SMMU about here. It will see that we're consuming events and making
> space in the queue, because we're still updating CONS.RD. All that an
> update of PROD.OVFLG serves to do is indicate to software that events
> have been discarded since the last time CONS.OVACKFLG was updated. It
> makes no difference to the SMMU if it continues to discard *more* events
> until software updates CONS.OVACKFLG again. It's entirely software's own
> decision how closely it wants to keep track of overflows.
>
> Like I say it's not clear how much Linux really cares about that, given
> that all we do with the information is log a message to indicate that
> some more events have been lost since the last time we logged the same
> message. Furthermore, the only thing we'll do with the overwhelming
> majority of events themselves is also log messages. Thus realistically
> if we're suddenly faced with processing a full event queue out of
> nowhere, then many of the events which *were* delivered to the queue
> will also be "lost" thanks to rate-limiting.
>
> FWIW I think it's still true that for our currently supported use-cases
> in Linux, *any* discardable event is a sign that something's gone wrong;
> a full queue of 32K events would already be a sign that something's gone
> *severely* wrong, so at that point knowing whether it was exactly 32K,
> or 32K + n for some indeterminate value of n, is unlikely to be
> significantly meaningful.
The issue I see with the current state is that the local (kernel) copy of the CONS register
will differ from the SMMU state for an indefinite period of time, until we get a new
event.
In the meantime, we cannot use the local copy as a value representing the SMMU's state.
That holds with or without this patch, and it should be clearly visible in the code.
Right now the kernel just prints warning messages.
If any change is implemented in the future, this state should be taken into account.
Syncing the CONS register immediately after the change would prevent any misunderstanding.
That is why I posted this patch; maybe I should have clarified it better in the
commit message.
Anyway, do you think we should at least find a way to make the eventq
CONS.OVACKFLG sync workflow clearer?
>
>>> It could be argued that we have a subtle inconsistency between
>>> arm_smmu_evtq_thread() and arm_smmu_priq_thread() here, but the fact is
>>> that the Event queue and PRI queue *do* have different overflow
>>> behaviours, so it could equally be argued that inconsistency in the code
>>> helps reflect that. FWIW I can't say I have a strong preference either way.
>>
>> As for the argument that the code can reflect the difference: in that case
>> the comment 'Sync our overflow flag, as we believe we're up to speed' is
>> already misleading.
>
> Yes, that is what I was alluding to. Sometimes if a comment doesn't
> clearly match the code it means the code is wrong. Sometimes it just
> means the comment is wrong.
>
> I'm not saying this patch is the wrong answer, but as presented it
> hasn't managed to convince me that it's the right one either. Largely
> since I'm not 100% sure what the exact question is - even with this
> change we'd still have the same ABA problem whenever the queue overflows
> again *before* it's completely drained.
>
> Thanks,
> Robin.
Thank you.
Tomas
Amazon Development Center Germany GmbH
Krausenstr. 38
10117 Berlin
Geschaeftsfuehrung: Christian Schlaeger, Jonathan Weiss
Eingetragen am Amtsgericht Charlottenburg unter HRB 149173 B
Sitz: Berlin
Ust-ID: DE 289 237 879