Message-ID: <bb026982d93d8bc9cf87257fcded543eb9cb4e8a.camel@fb.com>
Date:   Tue, 18 Sep 2018 16:31:09 +0000
From:   Rik van Riel <riel@...com>
To:     Song Liu <songliubraving@...com>,
        Eric Dumazet <edumazet@...gle.com>
CC:     netdev <netdev@...r.kernel.org>,
        Jeff Kirsher <jeffrey.t.kirsher@...el.com>,
        Alexander Duyck <alexander.h.duyck@...el.com>,
        "michael.chan@...adcom.com" <michael.chan@...adcom.com>,
        Kernel Team <Kernel-team@...com>
Subject: Re: pegged softirq and NAPI race (?)

On Tue, 2018-09-18 at 12:19 -0400, Song Liu wrote:
> > On Sep 18, 2018, at 6:45 AM, Eric Dumazet <edumazet@...gle.com>
> > wrote:
> > 
> > On Tue, Sep 18, 2018 at 1:41 AM Song Liu <songliubraving@...com>
> > wrote:
> > > 
> > > We are debugging an issue where netconsole messages trigger a
> > > pegged softirq (ksoftirqd taking 100% CPU for many seconds). We
> > > found this issue in production with both bnxt and ixgbe, on a
> > > 4.11-based kernel. It is easily reproducible with ixgbe on 4.11
> > > and on the latest net/net-next (see [1] for more detail).
> > > 
> > > After debugging for some time, we found that this issue is likely
> > > related to commit 39e6c8208d7b ("net: solve a NAPI race"). After
> > > reverting this commit, the steps described in [1] no longer
> > > reproduce the issue on ixgbe. Reverting the commit also reduces
> > > the chance of hitting the issue with bnxt (it still happens, but
> > > at a lower rate).
> > > 
> > > I tried to fix this issue with a relaxed variant (or the older
> > > version) of napi_schedule_prep() in netpoll, similar to the one
> > > used in napi_watchdog(). However, my tests do not always go as
> > > expected.
> > > 
> > > Please share your comments/suggestions on which direction we
> > > should take to fix this.
> > > 
> > > Thanks in advance!
> > > Song
> > > 
> > > 
> > > [1] https://www.spinics.net/lists/netdev/msg522328.html
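
For context, commit 39e6c8208d7b added a NAPIF_STATE_MISSED bit so that a
schedule attempt which finds NAPI_STATE_SCHED already set is recorded and
later replayed by napi_complete_done(), rather than silently dropped. A
rough sketch of the two variants Song refers to (simplified from memory,
not the verbatim mainline code):

/* After 39e6c8208d7b (sketch): a schedule attempt that loses the race
 * sets MISSED, and napi_complete_done() reschedules the poll when it
 * sees that bit.
 */
bool napi_schedule_prep(struct napi_struct *n)
{
	unsigned long val, new;

	do {
		val = READ_ONCE(n->state);
		if (unlikely(val & NAPIF_STATE_DISABLE))
			return false;
		new = val | NAPIF_STATE_SCHED;
		if (val & NAPIF_STATE_SCHED)
			new |= NAPIF_STATE_MISSED;
	} while (cmpxchg(&n->state, val, new) != val);

	return !(val & NAPIF_STATE_SCHED);
}

/* The "relaxed" check used inside napi_watchdog() (and roughly what the
 * pre-commit napi_schedule_prep() did): a schedule attempt that loses
 * the race is simply dropped, no MISSED bit is set.  (Fragment, as it
 * appears inside napi_watchdog().)
 */
if (!napi_disable_pending(napi) &&
    !test_and_set_bit(NAPI_STATE_SCHED, &napi->state))
	__napi_schedule_irqoff(napi);

With the MISSED replay, an extra schedule coming in from netpoll while the
device is already being polled can keep the NAPI on the poll list, which
may fit the behaviour change observed when the commit is reverted.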
> > 
> > You have not traced ixgbe to understand why the driver hits
> > "clean_complete=false" all the time?
> 
> The trace showed that we got "clean_complete=false" because
> ixgbe_clean_rx_irq() used the whole budget (64). It feels like the
> driver is tricked into processing old data on the rx_ring one more
> time.
> 
> Have you seen a similar issue?
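
For readers following along: a NAPI poll handler that consumes its entire
budget must not call napi_complete_done(); returning the full budget tells
net_rx_action() that more work remains, so the NAPI instance stays on the
poll list and is polled again. A minimal, driver-agnostic sketch of the
pattern (illustrative only; example_clean_rx_ring() and
example_enable_rx_irq() are hypothetical stand-ins, not ixgbe functions):

static int example_napi_poll(struct napi_struct *napi, int budget)
{
	/* Hypothetical helper standing in for the driver's RX cleanup
	 * routine (ixgbe_clean_rx_irq() in this thread); returns how
	 * many packets were processed, at most 'budget'.
	 */
	int work_done = example_clean_rx_ring(napi, budget);

	if (work_done == budget)
		return budget;	/* budget exhausted: stay in polled mode */

	/* All pending work done: leave polled mode and re-arm the IRQ. */
	if (napi_complete_done(napi, work_done))
		example_enable_rx_irq(napi);

	return work_done;
}

So "clean_complete=false on every round" means the RX ring never drains
within one 64-packet budget, whether because packets really do arrive that
fast or because stale descriptors are being re-processed, as Song suspects.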

A quick reading of the code suggests this means
polling cannot keep up with the rate of incoming
packets.

That should not be a surprise, given that polling
appears to happen on just one CPU, while interrupt
driven packet delivery was fanned out across a
larger number of CPUs.

Does the NAPI code have any way to periodically
force a return to IRQ mode, given that multiple
CPUs in IRQ mode can keep up with packets better
than a single CPU in polling mode?

Alternatively, is NAPI with multi-queue network
adapters supposed to be polling on multiple CPUs,
but simply failing to do so in this case?

