Message-ID: <CANn89iK4O+4RgFkaF0b9N6AFzZCGL8FjsNWBYNcr6MA2CaSRXw@mail.gmail.com>
Date: Tue, 18 Sep 2018 09:33:29 -0700
From: Eric Dumazet <edumazet@...gle.com>
To: songliubraving@...com
Cc: netdev <netdev@...r.kernel.org>,
Jeff Kirsher <jeffrey.t.kirsher@...el.com>,
Alexander Duyck <alexander.h.duyck@...el.com>,
michael.chan@...adcom.com, kernel-team <Kernel-team@...com>
Subject: Re: pegged softirq and NAPI race (?)
On Tue, Sep 18, 2018 at 9:19 AM Song Liu <songliubraving@...com> wrote:
>
>
>
> > On Sep 18, 2018, at 6:45 AM, Eric Dumazet <edumazet@...gle.com> wrote:
> >
> > On Tue, Sep 18, 2018 at 1:41 AM Song Liu <songliubraving@...com> wrote:
> >>
> >> We are debugging an issue where a netconsole message triggers a pegged
> >> softirq (ksoftirqd taking 100% CPU for many seconds). We found this issue
> >> in production with both bnxt and ixgbe, on a 4.11-based kernel. It is
> >> easily reproducible with ixgbe on 4.11 and on the latest net/net-next
> >> (see [1] for more detail).
> >>
> >> After debugging for some time, we found that this issue is likely related
> >> to 39e6c8208d7b ("net: solve a NAPI race"). After reverting this commit,
> >> the steps described in [1] no longer reproduce the issue on ixgbe.
> >> Reverting this commit also reduces the chances that we hit the issue with
> >> bnxt (it still happens, but at a lower rate).
> >>
> >> I tried to fix this issue with a relaxed variant (or an older version) of
> >> napi_schedule_prep() in netpoll, similar to the one used in napi_watchdog()
> >> (a sketch of that idea follows this quoted message). However, my tests do
> >> not always go as expected.
> >>
> >> Please share your comments/suggestions on which direction we should take
> >> to fix this.
> >>
> >> Thanks in advance!
> >> Song
> >>
> >>
> >> [1] https://www.spinics.net/lists/netdev/msg522328.html
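
For reference, a rough sketch (assuming a ~4.11-era kernel; the function names
below are made up, this is not actual netpoll code) of the two pieces discussed
above: what napi_schedule_prep() does after 39e6c8208d7b ("net: solve a NAPI
race"), and the "relaxed" test napi_watchdog() uses, which only tries to grab
NAPI_STATE_SCHED and does not set NAPIF_STATE_MISSED:

#include <linux/netdevice.h>

/*
 * Roughly what napi_schedule_prep() does after commit 39e6c8208d7b:
 * if NAPI_STATE_SCHED is already owned by a poller, record
 * NAPIF_STATE_MISSED so that poller reschedules itself before exiting.
 */
static bool example_napi_schedule_prep(struct napi_struct *n)
{
	unsigned long val, new;

	do {
		val = READ_ONCE(n->state);
		if (unlikely(val & NAPIF_STATE_DISABLE))
			return false;
		new = val | NAPIF_STATE_SCHED;
		if (val & NAPIF_STATE_SCHED)
			new |= NAPIF_STATE_MISSED;
	} while (cmpxchg(&n->state, val, new) != val);

	return !(val & NAPIF_STATE_SCHED);
}

/*
 * Hypothetical netpoll-side helper mirroring the "relaxed" test in
 * napi_watchdog(): only try to grab NAPI_STATE_SCHED, do not set
 * NAPIF_STATE_MISSED, since we are not reacting to a device IRQ.
 */
static void example_netpoll_napi_schedule(struct napi_struct *napi)
{
	if (!napi_disable_pending(napi) &&
	    !test_and_set_bit(NAPI_STATE_SCHED, &napi->state))
		__napi_schedule(napi);
}
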
> >
> > You have not traced ixgbe to understand why the driver hits
> > "clean_complete=false" all the time?
>
> The trace showed that we got "clean_complete=false" because
> ixgbe_clean_rx_irq() used the full budget (64). It feels like the driver
> is tricked into processing old data on the rx_ring one more time.
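
For context, a rough skeleton of the NAPI poll pattern being described (the
helper names are hypothetical, this is not the actual ixgbe code): when RX
cleaning consumes the whole budget, the poll function reports the full budget
back, NAPI stays scheduled and the softirq keeps re-invoking it, which is how
repeatedly "finding" work on the rx_ring can peg ksoftirqd:

#include <linux/netdevice.h>

/* Hypothetical stand-ins for the driver's RX clean and IRQ re-enable
 * routines.
 */
static int example_clean_rx_irq(struct napi_struct *napi, int budget);
static void example_enable_irq(struct napi_struct *napi);

/*
 * Skeleton of the usual NAPI poll pattern: if RX cleaning used the
 * whole budget, return the full budget so NAPI stays scheduled and
 * the softirq calls us again; otherwise complete NAPI and re-enable
 * the device interrupt.
 */
static int example_poll(struct napi_struct *napi, int budget)
{
	int work_done = example_clean_rx_irq(napi, budget);

	if (work_done >= budget)
		return budget;	/* the clean_complete == false case */

	if (napi_complete_done(napi, work_done))
		example_enable_irq(napi);

	return work_done;
}
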
Process old data? That would be quite a horrible bug!

ASAN would probably help here by detecting use-after-free or similar issues.