Message-ID: <CANn89iJTEMjYepKbr-8pmk0i03d9D+CfDFLPx+J=fqZivDJ9zQ@mail.gmail.com>
Date: Tue, 18 Sep 2018 11:17:44 -0700
From: Eric Dumazet <edumazet@...gle.com>
To: Alexei Starovoitov <ast@...com>
Cc: songliubraving@...com, netdev <netdev@...r.kernel.org>,
Jeff Kirsher <jeffrey.t.kirsher@...el.com>,
Alexander Duyck <alexander.h.duyck@...el.com>,
michael.chan@...adcom.com, kernel-team <Kernel-team@...com>
Subject: Re: pegged softirq and NAPI race (?)
On Tue, Sep 18, 2018 at 10:51 AM Alexei Starovoitov <ast@...com> wrote:
>
> On 9/18/18 6:45 AM, Eric Dumazet wrote:
> > On Tue, Sep 18, 2018 at 1:41 AM Song Liu <songliubraving@...com> wrote:
> >>
> >> We are debugging an issue where a netconsole message triggers a pegged softirq
> >> (ksoftirqd taking 100% CPU for many seconds). We found this issue in
> >> production with both bnxt and ixgbe, on a 4.11-based kernel. It is easily
> >> reproducible with ixgbe on both 4.11 and the latest net/net-next (see [1] for
> >> more detail).
> >>
> >> After debugging for some time, we found that this issue is likely related
> >> to 39e6c8208d7b ("net: solve a NAPI race"). After reverting this commit,
> >> the steps described in [1] no longer reproduce the issue on ixgbe. Reverting
> >> this commit also reduces the chances we hit the issue with bnxt (it still
> >> happens, at a lower rate).
> >>
> >> I tried to fix this issue with a relaxed variant (or older version) of
> >> napi_schedule_prep() in netpoll, just like the one in napi_watchdog().
> >> However, my tests do not always go as expected.
> >>
> >> Please share your comments/suggestions on which direction we should take
> >> to fix this.
> >>
> >> Thanks in advance!
> >> Song
> >>
> >>
> >> [1] https://www.spinics.net/lists/netdev/msg522328.html
> >
> > You have not traced ixgbe to understand why the driver hits
> > "clean_complete=false" all the time?
>
> Eric,
>
> I'm looking at commit 39e6c8208d7b and noticing that it does
> clear_bit(NAPI_STATE_MISSED, ...);
> in busy_poll_stop(), but not for netpoll.
> Could that be an issue?
>
> and then something like below is needed:
> diff --git a/net/core/netpoll.c b/net/core/netpoll.c
> index 57557a6a950c..a848be6b503c 100644
> --- a/net/core/netpoll.c
> +++ b/net/core/netpoll.c
> @@ -172,6 +172,7 @@ static void poll_one_napi(struct napi_struct *napi)
>  	trace_napi_poll(napi, work, 0);
>  
>  	clear_bit(NAPI_STATE_NPSVC, &napi->state);
> +	clear_bit(NAPI_STATE_MISSED, &napi->state);
>  }
NAPI_STATE_MISSED should only be cleared under strict circumstances.
The clear in busy_poll_stop() is really an optimization (as explained
in the comment there).

NAPI_STATE_MISSED is cleared when napi_complete_done() is eventually
called, but if ixgbe always handles 64 RX frames in its poll function,
napi_complete_done() is never called. The bug is in ixgbe, which keeps
pretending its poll function should be called forever.
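For reference, here is a minimal sketch of the usual NAPI poll contract
(not the actual ixgbe code; example_clean_rx_ring() and example_enable_irq()
are hypothetical helpers). napi_complete_done() is only reached when
work_done < budget, so a poll function that always reports a full budget
keeps NAPI scheduled and the softirq spinning:

/* Minimal sketch of the usual NAPI poll contract, not actual ixgbe code.
 * example_clean_rx_ring() and example_enable_irq() are hypothetical helpers.
 */
static int example_napi_poll(struct napi_struct *napi, int budget)
{
	/* Process at most 'budget' RX frames. */
	int work_done = example_clean_rx_ring(napi, budget);

	if (work_done < budget) {
		/* Ring drained: complete NAPI (this is where
		 * NAPI_STATE_MISSED is handled) and re-enable the
		 * device interrupt.
		 */
		if (napi_complete_done(napi, work_done))
			example_enable_irq(napi);
	}

	/* Returning the full budget tells net_rx_action() to keep
	 * polling, so napi_complete_done() is never reached in that case.
	 */
	return work_done;
}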