Message-ID: <92aed1ab-efa3-c667-7f20-8a2b8fc67469@gmail.com>
Date: Sun, 18 Oct 2020 19:57:53 +0200
From: Heiner Kallweit <hkallweit1@...il.com>
To: Jakub Kicinski <kuba@...nel.org>
Cc: Eric Dumazet <edumazet@...gle.com>,
Thomas Gleixner <tglx@...utronix.de>,
Eric Dumazet <eric.dumazet@...il.com>,
David Miller <davem@...emloft.net>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: Remove __napi_schedule_irqoff?
On 18.10.2020 19:19, Jakub Kicinski wrote:
> On Sun, 18 Oct 2020 10:20:41 +0200 Heiner Kallweit wrote:
>>>> Otherwise a non-solution could be to make IRQ_FORCED_THREADING
>>>> configurable.
>>>
>>> I have to say I do not understand why we want to defer to a thread the
>>> hard IRQ that we use in NAPI model.
>>>
>> Seems like the current forced threading comes with the big hammer and
>> threadifies all hard IRQs. To avoid this, all NAPI network drivers
>> would have to request their interrupts with IRQF_NO_THREAD.
>
> Right, it'd work for some drivers. Other drivers try to take spin locks
> in their IRQ handlers.
>
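Right - and per driver the change itself would be small. A minimal sketch
of what I mean (driver name "foo", its private struct and the helpers
around it are made up for illustration):

	#include <linux/interrupt.h>
	#include <linux/netdevice.h>

	struct foo_priv {
		struct napi_struct napi;
		int irq;
	};

	/* Hard-IRQ handler: with IRQF_NO_THREAD it keeps running in
	 * hard-IRQ context even when forced threading is active.
	 */
	static irqreturn_t foo_isr(int irq, void *dev_id)
	{
		struct foo_priv *priv = dev_id;

		/* ack/mask device interrupts here, then hand off to NAPI */
		napi_schedule(&priv->napi);

		return IRQ_HANDLED;
	}

	static int foo_open(struct net_device *ndev)
	{
		struct foo_priv *priv = netdev_priv(ndev);

		return request_irq(priv->irq, foo_isr, IRQF_NO_THREAD,
				   ndev->name, priv);
	}

Which is exactly where the spin lock problem you mention comes in: such a
handler stays in hard-IRQ context, so on RT it can't take a normal
spinlock_t, because that becomes a sleeping lock there.
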
> What gave me pause was that we have a busy loop in napi_schedule_prep:
>
> bool napi_schedule_prep(struct napi_struct *n)
> {
> 	unsigned long val, new;
> 
> 	do {
> 		val = READ_ONCE(n->state);
> 		if (unlikely(val & NAPIF_STATE_DISABLE))
> 			return false;
> 		new = val | NAPIF_STATE_SCHED;
> 
> 		/* Sets STATE_MISSED bit if STATE_SCHED was already set
> 		 * This was suggested by Alexander Duyck, as compiler
> 		 * emits better code than :
> 		 * if (val & NAPIF_STATE_SCHED)
> 		 *   new |= NAPIF_STATE_MISSED;
> 		 */
> 		new |= (val & NAPIF_STATE_SCHED) / NAPIF_STATE_SCHED *
> 						   NAPIF_STATE_MISSED;
> 	} while (cmpxchg(&n->state, val, new) != val);
> 
> 	return !(val & NAPIF_STATE_SCHED);
> }
>
>
> Dunno how acceptable this is to run in an IRQ handler on RT..
>
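Regarding the expression with the division: since NAPIF_STATE_SCHED is a
single bit, (val & NAPIF_STATE_SCHED) / NAPIF_STATE_SCHED evaluates to
0 or 1, so the multiplication sets NAPIF_STATE_MISSED exactly when SCHED
was already set, just without a branch. A tiny stand-alone check (plain C,
bit positions picked for the example only, not the kernel's actual values):

	#include <assert.h>

	#define STATE_SCHED	(1UL << 0)	/* example bit positions only */
	#define STATE_MISSED	(1UL << 1)

	int main(void)
	{
		unsigned long val, new;

		/* SCHED was not set: the expression contributes nothing */
		val = 0;
		new = val | STATE_SCHED;
		new |= (val & STATE_SCHED) / STATE_SCHED * STATE_MISSED;
		assert(new == STATE_SCHED);

		/* SCHED was already set: MISSED gets set on top */
		val = STATE_SCHED;
		new = val | STATE_SCHED;
		new |= (val & STATE_SCHED) / STATE_SCHED * STATE_MISSED;
		assert(new == (STATE_SCHED | STATE_MISSED));

		return 0;
	}
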
As for the loop itself: if I understand the code correctly, it's not a
loop that actually waits for something. It just retries if the value of
n->state changed between the READ_ONCE and the cmpxchg. So I don't think
we'll ever see the loop body executed more than twice.
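
To make the retry semantics explicit, here is the same update pattern as
a stand-alone sketch with C11 atomics (not the kernel primitives, and the
bit defines are again made up for the example): the compare-exchange only
fails, and the body only re-runs, when another CPU changed the state word
between the load and the update.

	#include <stdatomic.h>
	#include <stdbool.h>

	#define STATE_SCHED	(1UL << 0)	/* example bit positions only */
	#define STATE_MISSED	(1UL << 1)
	#define STATE_DISABLE	(1UL << 2)

	static bool schedule_prep(atomic_ulong *state)
	{
		unsigned long val, new;

		val = atomic_load(state);
		do {
			if (val & STATE_DISABLE)
				return false;
			new = val | STATE_SCHED;
			new |= (val & STATE_SCHED) / STATE_SCHED * STATE_MISSED;
			/* on failure, val is refreshed with the current
			 * value and we simply recompute new; there is no
			 * waiting involved
			 */
		} while (!atomic_compare_exchange_strong(state, &val, new));

		return !(val & STATE_SCHED);
	}

So every extra iteration means some other CPU successfully updated the
state in the meantime; the loop never spins waiting for a condition to
become true.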