Message-ID: <87h6ccukr9.ffs@tglx>
Date: Fri, 26 Jul 2024 19:27:06 +0200
From: Thomas Gleixner <tglx@...utronix.de>
To: Pete Swain <swine@...gle.com>, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] FIXUP: genirq: defuse spurious-irq timebomb
Pete!
Sorry for the delay.
On Sun, Jul 07 2024 at 20:39, Thomas Gleixner wrote:
> On Fri, Jun 14 2024 at 21:42, Pete Swain wrote:
>> The flapping-irq detector still has a timebomb.
>>
>> A pathological workload or test script can arm the
>> spurious-irq timebomb described in commit
>> 4f27c00bf80f ("Improve behaviour of spurious IRQ detect").
>>
>> This leads to irqs being moved to the much slower polled mode,
>> despite the actual unhandled-irq rate being well under the
>> 99.9k/100k threshold that the code appears to check.
>>
>> How?
>> - A queued completion handler, like nvme's, services events
>>   as they appear in the queue, even if the irq corresponding
>>   to an event has not yet been seen.
>>
>> - The queues are frequently empty, so "spurious" irqs show up
>>   whenever a threaded handler's
>>     while (events_queued()) process_them();
>>   loop consumes the last event(s) while those events' irqs are
>>   still being posted. The next handler invocation then finds
>>   nothing left, so it returns IRQ_NONE.
I'm still trying to understand the larger picture here. So what I decode
from your changelog is:
The threaded handler can drain the events. While doing so, the
non-threaded handler returns IRQ_WAKE_THREAD, and because the threaded
handler then finds nothing left to handle, these hard interrupts are
accounted as spurious.
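IOW, the pattern is something like this (hypothetical driver; the
queue helpers are made-up names):

    /* Hard handler: nothing to do except kicking the thread */
    static irqreturn_t demo_hardirq(int irq, void *data)
    {
            return IRQ_WAKE_THREAD;
    }

    /* Threaded handler: drains the completion queue */
    static irqreturn_t demo_thread_fn(int irq, void *data)
    {
            struct demo_queue *q = data;
            unsigned int handled = 0;

            while (events_queued(q)) {
                    process_one_event(q);
                    handled++;
            }

            /*
             * A previous run might already have consumed the events
             * which belong to this interrupt. Then nothing is left
             * and this returns IRQ_NONE.
             */
            return handled ? IRQ_HANDLED : IRQ_NONE;
    }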
>> - In each run of "unhandled" irqs, exactly one IRQ_NONE response
>>   is promoted to IRQ_HANDLED by note_interrupt()'s
>>   SPURIOUS_DEFERRED logic.
>>
>> - Any 2+ unhandled-irq runs will increment irqs_unhandled.
>> The time_after() check in note_interrupt() resets irqs_unhandled
>> to 1 after an idle period, but if irqs are never spaced more
>> than HZ/10 apart, irqs_unhandled keeps growing.
>>
>> - During processing of long completion queues, the non-threaded
>> handlers will return IRQ_WAKE_THREAD, for potentially thousands
>> of per-event irqs. These bypass note_interrupt()'s irq_count++ logic,
>> so do not count as handled, and do not invoke the flapping-irq
>> logic.
They cannot count as handled because they are not handling
anything. They only wake the thread and the thread handler is the one
which needs to decide whether it had something to handle or not.
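The thread side accounting is roughly this (condensed from
irq_thread() in kernel/irq/manage.c, from memory):

    action_ret = handler_fn(desc, action);
    if (action_ret == IRQ_HANDLED)
            atomic_inc(&desc->threads_handled);

note_interrupt() only ever compares that counter against the snapshot
stored in desc->threads_handled_last.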
>> - When the _counted_ irq_count reaches the 100k threshold,
>> it's possible for irqs_unhandled > 99.9k to force a move
>> to polling mode, even though many millions of _WAKE_THREAD
>> irqs have been handled without being counted.
>>
>> Solution: include IRQ_WAKE_THREAD events in irq_count.
>> Only when IRQ_NONE responses outweigh (IRQ_HANDLED + IRQ_WAKE_THREAD)
>> by the old 99:1 ratio will an irq be moved to polling mode.
>
> Nice detective work. Though I'm not entirely sure whether that's the
> correct approach as it might misjudge the situation where
> IRQ_WAKE_THREAD is issued but the thread does not make progress at all.
Ok. That won't happen because the SPURIOUS_DEFERRED bit stays set as
before.
Now looking deeper at what your patch does. Contrary to the current
code, the very first hard interrupt of a particular queue is accounted
in desc->irq_count.
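IOW, as far as I can see the net effect is roughly this (my
paraphrase, not your literal diff):

    if (!(desc->threads_handled_last & SPURIOUS_DEFERRED)) {
            desc->threads_handled_last |= SPURIOUS_DEFERRED;
            desc->irq_count++;      /* New: account the first wakeup */
            return;
    }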
Everything else stays the same:
- SPURIOUS_DEFERRED is sticky unless the hard interrupt handler returns
IRQ_HANDLED, which is not the case in the NVME scenario. So the
!SPURIOUS_DEFERRED code path is only taken once.
- Any consecutive hard interrupt which returns IRQ_WAKE_THREAD where
threads_handled == threads_handled_last is accounted as IRQ_NONE as
before.
- Any consecutive hard interrupt which returns IRQ_NONE is accounted
as IRQ_NONE as before.
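For reference, the path in question boils down to roughly this
(heavily condensed paraphrase of note_interrupt() in
kernel/irq/spurious.c, not the literal code):

    if (action_ret == IRQ_WAKE_THREAD) {
            /* First wakeup arms the deferred detection, uncounted */
            if (!(desc->threads_handled_last & SPURIOUS_DEFERRED)) {
                    desc->threads_handled_last |= SPURIOUS_DEFERRED;
                    return;
            }
            /* Did the thread handle anything since the last one? */
            handled = atomic_read(&desc->threads_handled);
            handled |= SPURIOUS_DEFERRED;
            if (handled != desc->threads_handled_last) {
                    action_ret = IRQ_HANDLED;
                    desc->threads_handled_last = handled;
            } else {
                    action_ret = IRQ_NONE;
            }
            /* A hard handler returning IRQ_HANDLED clears the bit */
    }

    if (action_ret == IRQ_NONE) {
            /* Reset after an idle period, otherwise accumulate */
            if (time_after(jiffies, desc->last_unhandled + HZ/10))
                    desc->irqs_unhandled = 1;
            else
                    desc->irqs_unhandled++;
            desc->last_unhandled = jiffies;
    }

    if (++desc->irq_count >= 100000) {
            desc->irq_count = 0;
            if (desc->irqs_unhandled > 99900) {
                    /* Stuck interrupt: disable it, switch to polling */
            }
            desc->irqs_unhandled = 0;
    }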
I might be missing something of course, but I don't see what this change
fixes at all.
Thanks,
tglx