Message-ID: <20170228081702.35ba7a6a@xeon-e3>
Date: Tue, 28 Feb 2017 08:17:02 -0800
From: Stephen Hemminger <stephen@...workplumber.org>
To: Eric Dumazet <eric.dumazet@...il.com>
Cc: David Miller <davem@...emloft.net>, netdev <netdev@...r.kernel.org>
Subject: Re: [PATCH v3 net] net: solve a NAPI race
On Mon, 27 Feb 2017 12:18:31 -0800
Eric Dumazet <eric.dumazet@...il.com> wrote:
> This can happen with busy polling users, or if gro_flush_timeout is
> used. But some other uses of napi_schedule() in drivers can cause this
> as well.
Where were IRQs re-enabled?
> thread 1                             thread 2 (could be on same cpu)
>
> // busy polling or napi_watchdog()
> napi_schedule();
> ...
> napi->poll()
>
> device polling:
> read 2 packets from ring buffer
>
>                                      Additional 3rd packet is available.
>
>                                      device hard irq
>
>                                      // does nothing because
>                                      // NAPI_STATE_SCHED bit is owned
>                                      // by thread 1
>                                      napi_schedule();
>
> napi_complete_done(napi, 2);
> rearm_irq();
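To make the window concrete, here is roughly the usual (pre-patch) poll
idiom where it opens up; the foo_* names, struct and helpers below are
purely illustrative, not taken from any particular driver:

#include <linux/netdevice.h>

/* hypothetical driver state */
struct foo_priv {
        struct napi_struct napi;
        /* ring pointers, irq number, ... */
};

/* hypothetical helpers a real driver would provide */
static int foo_clean_rx_ring(struct foo_priv *priv, int budget);
static void foo_enable_irq(struct foo_priv *priv);

static int foo_poll(struct napi_struct *napi, int budget)
{
        struct foo_priv *priv = container_of(napi, struct foo_priv, napi);
        int work_done = foo_clean_rx_ring(priv, budget); /* reads 2 packets */

        if (work_done < budget) {
                /* A 3rd packet can land here.  The device hard irq then
                 * calls napi_schedule(), which does nothing because this
                 * thread still owns NAPI_STATE_SCHED ... */
                napi_complete_done(napi, work_done);
                /* ... and the irq is re-armed with work still sitting in
                 * the ring, so the 3rd packet waits for the next irq. */
                foo_enable_irq(priv);
        }
        return work_done;
}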
Maybe it's just as simple as using irqsave/irqrestore in the driver.
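A sketch of the pattern that suggestion seems to point at: serialize the
completion path against the device hard irq with a lock taken irqsave in
both paths, and re-check the ring before completing.  Again the foo_*
names and helpers are hypothetical, and this assumes the struct foo_priv
above gains a spinlock_t lock:

#include <linux/interrupt.h>
#include <linux/spinlock.h>

static irqreturn_t foo_isr(int irq, void *data)
{
        struct foo_priv *priv = data;
        unsigned long flags;

        spin_lock_irqsave(&priv->lock, flags);
        foo_disable_irq(priv);          /* mask device interrupts */
        napi_schedule(&priv->napi);     /* no-op if poll still owns SCHED */
        spin_unlock_irqrestore(&priv->lock, flags);
        return IRQ_HANDLED;
}

static int foo_poll(struct napi_struct *napi, int budget)
{
        struct foo_priv *priv = container_of(napi, struct foo_priv, napi);
        int work_done = foo_clean_rx_ring(priv, budget);
        unsigned long flags;

        if (work_done == budget)
                return budget;          /* core will poll us again */

        spin_lock_irqsave(&priv->lock, flags);
        if (foo_rx_pending(priv)) {
                /* a packet slipped in after the ring was drained;
                 * don't complete, ask the core to poll us again */
                spin_unlock_irqrestore(&priv->lock, flags);
                return budget;
        }
        napi_complete_done(napi, work_done);
        foo_enable_irq(priv);
        spin_unlock_irqrestore(&priv->lock, flags);
        return work_done;
}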