Message-ID: <0FD562CC-CDE9-43C8-9623-B42AC7A208C8@fb.com>
Date: Tue, 18 Sep 2018 20:37:45 +0000
From: Song Liu <songliubraving@...com>
To: Eric Dumazet <edumazet@...gle.com>
CC: Alexei Starovoitov <ast@...com>, netdev <netdev@...r.kernel.org>,
"Jeff Kirsher" <jeffrey.t.kirsher@...el.com>,
Alexander Duyck <alexander.h.duyck@...el.com>,
"michael.chan@...adcom.com" <michael.chan@...adcom.com>,
Kernel Team <Kernel-team@...com>
Subject: Re: pegged softirq and NAPI race (?)
> On Sep 18, 2018, at 11:17 AM, Eric Dumazet <edumazet@...gle.com> wrote:
>
> On Tue, Sep 18, 2018 at 10:51 AM Alexei Starovoitov <ast@...com> wrote:
>>
>> On 9/18/18 6:45 AM, Eric Dumazet wrote:
>>> On Tue, Sep 18, 2018 at 1:41 AM Song Liu <songliubraving@...com> wrote:
>>>>
>>>> We are debugging this issue that netconsole message triggers pegged softirq
>>>> (ksoftirqd taking 100% CPU for many seconds). We found this issue in
>>>> production with both bnxt and ixgbe, on a 4.11 based kernel. This is easily
>>>> reproducible with ixgbe on 4.11, and latest net/net-next (see [1] for more
>>>> detail).
>>>>
>>>> After debugging for some time, we found that this issue is likely related
>>>> to 39e6c8208d7b ("net: solve a NAPI race"). After reverting this commit,
>>>> the steps described in [1] cannot reproduce the issue on ixgbe. Reverting
>>>> this commit also reduces the chances we hit the issue with bnxt (it still
>>>> happens with a lower rate).
>>>>
>>>> I tried to fix this issue with relaxed variant (or older version) of
>>>> napi_schedule_prep() in netpoll, just like the one on napi_watchdog().
>>>> However, my tests do not always go as expected.
>>>>
>>>> Please share your comments/suggestions on which direction shall we try
>>>> to fix this.
>>>>
>>>> Thanks in advance!
>>>> Song
>>>>
>>>>
>>>> [1] https://www.spinics.net/lists/netdev/msg522328.html
>>>
>>> Have you traced ixgbe to understand why the driver hits
>>> "clean_complete=false" all the time?
>>
>> Eric,
>>
>> I'm looking at commit 39e6c8208d7b and wondering why it does
>> clear_bit(NAPI_STATE_MISSED,..);
>> for busy_poll_stop(), but not for netpoll.
>> Could that be an issue?
>>
>> and then something like below is needed:
>> diff --git a/net/core/netpoll.c b/net/core/netpoll.c
>> index 57557a6a950c..a848be6b503c 100644
>> --- a/net/core/netpoll.c
>> +++ b/net/core/netpoll.c
>> @@ -172,6 +172,7 @@ static void poll_one_napi(struct napi_struct *napi)
>> trace_napi_poll(napi, work, 0);
>>
>> clear_bit(NAPI_STATE_NPSVC, &napi->state);
>> + clear_bit(NAPI_STATE_MISSED, &napi->state);
>> }
>
>
> NAPI_STATE_MISSED should only be cleared under strict circumstances.
>
> The clear in busy_poll_stop() is really an optimization (as explained
> in the comment).
>
> It is cleared when napi_complete_done() is eventually called, but if
> ixgbe always handles 64 RX frames in its poll function,
> napi_complete_done() will not be called. The bug is in ixgbe,
> which pretends its poll function should be called forever.
A patch like the following seems to fix the issue for ixgbe, but I
cannot explain why yet.
Does this ring a bell?

Thanks,
Song
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
index 787c84fb20dd..51611f799dae 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
@@ -3059,11 +3059,14 @@ static irqreturn_t ixgbe_msix_other(int irq, void *data)
static irqreturn_t ixgbe_msix_clean_rings(int irq, void *data)
{
struct ixgbe_q_vector *q_vector = data;
+ struct napi_struct *napi = &q_vector->napi;
/* EIAM disabled interrupts (on this vector) for us */
- if (q_vector->rx.ring || q_vector->tx.ring)
- napi_schedule_irqoff(&q_vector->napi);
+ if ((q_vector->rx.ring || q_vector->tx.ring) &&
+ !napi_disable_pending(napi) &&
+ !test_and_set_bit(NAPI_STATE_SCHED, &napi->state))
+ __napi_schedule_irqoff(napi);
return IRQ_HANDLED;
}