Message-ID: <6FA4008E-CEEB-4EAB-BAD8-267D41574248@fb.com>
Date:   Tue, 18 Sep 2018 16:19:13 +0000
From:   Song Liu <songliubraving@...com>
To:     Eric Dumazet <edumazet@...gle.com>
CC:     netdev <netdev@...r.kernel.org>,
        Jeff Kirsher <jeffrey.t.kirsher@...el.com>,
        Alexander Duyck <alexander.h.duyck@...el.com>,
        "michael.chan@...adcom.com" <michael.chan@...adcom.com>,
        Kernel Team <Kernel-team@...com>
Subject: Re: pegged softirq and NAPI race (?) 



> On Sep 18, 2018, at 6:45 AM, Eric Dumazet <edumazet@...gle.com> wrote:
> 
> On Tue, Sep 18, 2018 at 1:41 AM Song Liu <songliubraving@...com> wrote:
>> 
>> We are debugging an issue where a netconsole message triggers a pegged softirq
>> (ksoftirqd taking 100% CPU for many seconds). We found this issue in
>> production with both bnxt and ixgbe, on a 4.11-based kernel. It is easily
>> reproducible with ixgbe on 4.11 and on the latest net/net-next (see [1] for
>> more detail).
>> 
>> After debugging for some time, we found that this issue is likely related
>> to 39e6c8208d7b ("net: solve a NAPI race"). After reverting this commit,
>> the steps described in [1] no longer reproduce the issue on ixgbe. Reverting
>> this commit also reduces the chances that we hit the issue with bnxt (it
>> still happens, but at a lower rate).
>> 
>> I tried to fix this issue with a relaxed variant (or older version) of
>> napi_schedule_prep() in netpoll, just like the one in napi_watchdog().
>> However, my tests did not always go as expected.
>> 
>> Please share your comments/suggestions on which direction we should try
>> to fix this.
>> 
>> Thanks in advance!
>> Song
>> 
>> 
>> [1] https://www.spinics.net/lists/netdev/msg522328.html
> 
> You have not traced ixgbe to understand why the driver hits
> "clean_complete=false" all the time?

The trace showed that we got "clean_complete=false" because
ixgbe_clean_rx_irq() used the full budget (64). It feels like the driver
is tricked into processing old data on the rx_ring one more time.
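
To illustrate what I mean, here is a simplified sketch of the usual NAPI
poll pattern (made-up names, not the actual ixgbe code): as long as the RX
cleanup keeps consuming the whole budget, the poll function never calls
napi_complete_done(), so net_rx_action() keeps the napi on the poll list
and ksoftirqd stays busy.

#include <linux/netdevice.h>

/* Hypothetical per-queue container, standing in for ixgbe_q_vector. */
struct example_q_vector {
	struct napi_struct napi;
	/* rings, IRQ bookkeeping, ... */
};

/* Assumed helpers: clean up to 'budget' RX packets / re-arm the queue IRQ. */
int example_clean_rx_irq(struct example_q_vector *q_vector, int budget);
void example_enable_irq(struct example_q_vector *q_vector);

static int example_poll(struct napi_struct *napi, int budget)
{
	struct example_q_vector *q_vector =
		container_of(napi, struct example_q_vector, napi);
	bool clean_complete = true;

	/* RX cleanup consuming the whole budget (64 in our case) means we
	 * cannot claim completion.
	 */
	if (example_clean_rx_irq(q_vector, budget) >= budget)
		clean_complete = false;

	if (!clean_complete)
		return budget;	/* stay on the poll list, poll again */

	/* All work done: leave polling mode and re-enable the interrupt. */
	napi_complete_done(napi, 0);
	example_enable_irq(q_vector);
	return 0;
}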

Have you seen a similar issue?
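
FWIW, the relaxed scheduling I experimented with in netpoll was roughly along
these lines, modeled on the pattern napi_watchdog() uses (illustrative name,
not the exact patch I tested): take NAPI_STATE_SCHED directly instead of
going through napi_schedule_prep(), so NAPI_STATE_MISSED is never set from
this path.

#include <linux/netdevice.h>

/* Illustrative sketch only: schedule the napi without ever setting
 * NAPI_STATE_MISSED, similar to the relaxed check in napi_watchdog().
 */
static void example_netpoll_schedule(struct napi_struct *napi)
{
	if (!napi_disable_pending(napi) &&
	    !test_and_set_bit(NAPI_STATE_SCHED, &napi->state))
		__napi_schedule(napi);
}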

Thanks,
Song
