Message-ID: <CANn89iKd=RAKESnYiZeXPN2Bo-Ta02NsohCPVvSc3z3noKZfmA@mail.gmail.com>
Date: Wed, 3 Mar 2021 10:55:12 +0100
From: Eric Dumazet <edumazet@...gle.com>
To: Wei Wang <weiwan@...gle.com>
Cc: Jakub Kicinski <kuba@...nel.org>,
"David S . Miller" <davem@...emloft.net>,
netdev <netdev@...r.kernel.org>,
Martin Zaharinov <micron10@...il.com>,
Alexander Duyck <alexanderduyck@...com>,
Paolo Abeni <pabeni@...hat.com>,
Hannes Frederic Sowa <hannes@...essinduktion.org>
Subject: Re: [PATCH net v3] net: fix race between napi kthread mode and busy poll
On Wed, Mar 3, 2021 at 10:53 AM Eric Dumazet <edumazet@...gle.com> wrote:
>
> On Tue, Mar 2, 2021 at 2:21 AM Wei Wang <weiwan@...gle.com> wrote:
> >
> > Currently, napi_thread_wait() checks the NAPI_STATE_SCHED bit to
> > determine whether the kthread owns this napi and can call napi->poll()
> > on it. However, if socket busy poll is enabled, the busy poll thread
> > can grab the SCHED bit (after the previous napi->poll() invokes
> > napi_complete_done() and clears the SCHED bit) and try to poll the
> > same napi. napi_disable() could grab the SCHED bit as well.
> > This patch fixes the race by adding a new bit,
> > NAPI_STATE_SCHED_THREADED, to napi->state. The bit is set in
> > ____napi_schedule() if threaded mode is enabled, cleared in
> > napi_complete_done(), and the kthread polls the napi only when it is
> > set. This distinguishes kthread ownership of the napi from the other
> > scenarios and fixes the race.
> >
> > Fixes: 29863d41bb6e ("net: implement threaded-able napi poll loop support")
> > Reported-by: Martin Zaharinov <micron10@...il.com>
> > Suggested-by: Jakub Kicinski <kuba@...nel.org>
> > Signed-off-by: Wei Wang <weiwan@...gle.com>
> > Cc: Alexander Duyck <alexanderduyck@...com>
> > Cc: Eric Dumazet <edumazet@...gle.com>
> > Cc: Paolo Abeni <pabeni@...hat.com>
> > Cc: Hannes Frederic Sowa <hannes@...essinduktion.org>
> > ---
> > include/linux/netdevice.h | 2 ++
> > net/core/dev.c | 14 +++++++++++++-
> > 2 files changed, 15 insertions(+), 1 deletion(-)
> >
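To restate the race being fixed, schematically (a timeline sketch, not
actual kernel code):

	/* kthread                        busy-poll thread
	 * -------                        ----------------
	 * napi->poll()
	 * napi_complete_done()
	 *   -> clears NAPI_STATE_SCHED
	 *                                sk_busy_loop()
	 *                                  -> atomically grabs NAPI_STATE_SCHED
	 * napi_thread_wait() sees SCHED
	 * set and polls again
	 *   -> two pollers on one napi
	 */

With the fix, the kthread instead keys off NAPI_STATE_SCHED_THREADED,
which only ____napi_schedule() sets, so a busy poller (or napi_disable())
taking SCHED can no longer be mistaken for kthread ownership.
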
> > diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
> > index ddf4cfc12615..682908707c1a 100644
> > --- a/include/linux/netdevice.h
> > +++ b/include/linux/netdevice.h
> > @@ -360,6 +360,7 @@ enum {
> > NAPI_STATE_IN_BUSY_POLL, /* sk_busy_loop() owns this NAPI */
> > NAPI_STATE_PREFER_BUSY_POLL, /* prefer busy-polling over softirq processing*/
> > NAPI_STATE_THREADED, /* The poll is performed inside its own thread*/
> > + NAPI_STATE_SCHED_THREADED, /* Napi is currently scheduled in threaded mode */
> > };
> >
> > enum {
> > @@ -372,6 +373,7 @@ enum {
> > NAPIF_STATE_IN_BUSY_POLL = BIT(NAPI_STATE_IN_BUSY_POLL),
> > NAPIF_STATE_PREFER_BUSY_POLL = BIT(NAPI_STATE_PREFER_BUSY_POLL),
> > NAPIF_STATE_THREADED = BIT(NAPI_STATE_THREADED),
> > + NAPIF_STATE_SCHED_THREADED = BIT(NAPI_STATE_SCHED_THREADED),
> > };
> >
> > enum gro_result {
> > diff --git a/net/core/dev.c b/net/core/dev.c
> > index 6c5967e80132..03c4763de351 100644
> > --- a/net/core/dev.c
> > +++ b/net/core/dev.c
> > @@ -4294,6 +4294,8 @@ static inline void ____napi_schedule(struct softnet_data *sd,
> > */
> > thread = READ_ONCE(napi->thread);
> > if (thread) {
> > + if (thread->state != TASK_INTERRUPTIBLE)
>
> How safe is this read?
>
> Presumably KMSAN will detect that another cpu/thread is able to change
> thread->state under us,
> so a READ_ONCE() (or data_race()) would be needed.
>
Of course I meant KCSAN here.
> Nowhere else in the kernel can we find a similar construct; I find it
> unfortunate to bury in net/core/dev.c something that might become
> incorrect in the future.
>
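Something like this is what I have in mind for the annotation (a sketch
only, assuming the check is kept at all):

	/* thread->state may change under us; annotate the lockless
	 * read so KCSAN does not flag it.
	 */
	if (READ_ONCE(thread->state) != TASK_INTERRUPTIBLE)
		set_bit(NAPI_STATE_SCHED_THREADED, &napi->state);
	wake_up_process(thread);
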
> > + set_bit(NAPI_STATE_SCHED_THREADED, &napi->state);
> > wake_up_process(thread);
> > return;
> > }
> > @@ -6486,6 +6488,7 @@ bool napi_complete_done(struct napi_struct *n, int work_done)
> > WARN_ON_ONCE(!(val & NAPIF_STATE_SCHED));
> >
> > new = val & ~(NAPIF_STATE_MISSED | NAPIF_STATE_SCHED |
> > + NAPIF_STATE_SCHED_THREADED |
> > NAPIF_STATE_PREFER_BUSY_POLL);
> >
> > /* If STATE_MISSED was set, leave STATE_SCHED set,
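(For context, this "new = ..." assignment sits inside
napi_complete_done()'s atomic retry loop; roughly the following, as a
simplified sketch with the MISSED handling elided:

	unsigned long val, new;

	do {
		val = READ_ONCE(n->state);
		new = val & ~(NAPIF_STATE_MISSED | NAPIF_STATE_SCHED |
			      NAPIF_STATE_SCHED_THREADED |
			      NAPIF_STATE_PREFER_BUSY_POLL);
		/* if MISSED was set, SCHED is kept so we poll once more */
	} while (cmpxchg(&n->state, val, new) != val);

so SCHED_THREADED is dropped atomically together with SCHED when a poll
completes.)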
> > @@ -6968,16 +6971,25 @@ static int napi_poll(struct napi_struct *n, struct list_head *repoll)
> >
> > static int napi_thread_wait(struct napi_struct *napi)
> > {
> > + bool woken = false;
> > +
> > set_current_state(TASK_INTERRUPTIBLE);
> >
> > while (!kthread_should_stop() && !napi_disable_pending(napi)) {
> > - if (test_bit(NAPI_STATE_SCHED, &napi->state)) {
> > + /* Testing SCHED_THREADED bit here to make sure the current
> > + * kthread owns this napi and could poll on this napi.
> > + * Testing SCHED bit is not enough because SCHED bit might be
> > + * set by some other busy poll thread or by napi_disable().
> > + */
> > + if (test_bit(NAPI_STATE_SCHED_THREADED, &napi->state) || woken) {
> > WARN_ON(!list_empty(&napi->poll_list));
> > __set_current_state(TASK_RUNNING);
> > return 0;
> > }
> >
> > schedule();
> > + /* woken being true indicates this thread owns this napi. */
> > + woken = true;
> > set_current_state(TASK_INTERRUPTIBLE);
> > }
> > __set_current_state(TASK_RUNNING);
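The resulting handshake between scheduler and kthread, as I read it
(sketch):

	/* ____napi_schedule()            napi_thread_wait()
	 * -------------------            ------------------
	 *                                set_current_state(TASK_INTERRUPTIBLE);
	 * set_bit(SCHED_THREADED, ...);
	 * wake_up_process(thread);       schedule() returns
	 *                                woken = true;
	 *                                SCHED_THREADED set (or woken)
	 *                                  -> poll
	 */

Once schedule() has returned at least once, the wakeup can only have come
from ____napi_schedule() (stop and disable are checked separately in the
loop condition), which is why "woken" alone is enough to claim ownership
on later iterations.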
> > --
> > 2.30.1.766.gb4fecdf3b7-goog
> >