Message-ID: <8610abc4-65c6-6808-e5d4-c2da8083595a@huawei.com>
Date: Thu, 30 Mar 2023 10:33:43 +0800
From: Yunsheng Lin <linyunsheng@...wei.com>
To: Eric Dumazet <edumazet@...gle.com>
CC: "David S . Miller" <davem@...emloft.net>,
Jakub Kicinski <kuba@...nel.org>,
Paolo Abeni <pabeni@...hat.com>,
Jason Xing <kernelxing@...cent.com>, <netdev@...r.kernel.org>,
<eric.dumazet@...il.com>
Subject: Re: [PATCH net-next 4/4] net: optimize ____napi_schedule() to avoid
extra NET_RX_SOFTIRQ
On 2023/3/29 23:47, Eric Dumazet wrote:
> On Wed, Mar 29, 2023 at 2:47 PM Yunsheng Lin <linyunsheng@...wei.com> wrote:
>>
>> On 2023/3/29 7:50, Eric Dumazet wrote:
>>> ____napi_schedule() adds a napi into current cpu softnet_data poll_list,
>>> then raises NET_RX_SOFTIRQ to make sure net_rx_action() will process it.
>>>
>>> Idea of this patch is to not raise NET_RX_SOFTIRQ when being called indirectly
>>> from net_rx_action(), because we can process poll_list from this point,
>>> without going to full softirq loop.
>>>
>>> This needs a change in net_rx_action() to make sure we restart
>>> its main loop if sd->poll_list was updated without NET_RX_SOFTIRQ
>>> being raised.
>>>
>>> Signed-off-by: Eric Dumazet <edumazet@...gle.com>
>>> Cc: Jason Xing <kernelxing@...cent.com>
>>> ---
>>> net/core/dev.c | 22 ++++++++++++++++++----
>>> 1 file changed, 18 insertions(+), 4 deletions(-)
>>>
>>> diff --git a/net/core/dev.c b/net/core/dev.c
>>> index f34ce93f2f02e7ec71f5e84d449fa99b7a882f0c..0c4b21291348d4558f036fb05842dab023f65dc3 100644
>>> --- a/net/core/dev.c
>>> +++ b/net/core/dev.c
>>> @@ -4360,7 +4360,11 @@ static inline void ____napi_schedule(struct softnet_data *sd,
>>>  	}
>>>
>>>  	list_add_tail(&napi->poll_list, &sd->poll_list);
>>> -	__raise_softirq_irqoff(NET_RX_SOFTIRQ);
>>> +	/* If not called from net_rx_action()
>>> +	 * we have to raise NET_RX_SOFTIRQ.
>>> +	 */
>>> +	if (!sd->in_net_rx_action)
>>
>> It seems sd->in_net_rx_action may be read/written by different CPUs at the
>> same time; does it need something like READ_ONCE()/WRITE_ONCE() to access
>> sd->in_net_rx_action?
>
> You probably missed the 2nd patch, which explains that in_net_rx_action is
> only read and written by the current cpu (the one owning the percpu var).
>
> Have you found a caller that is not passing sd = this_cpu_ptr(&softnet_data)
> to ____napi_schedule()?
You are right.
The one small problem I see is that a ____napi_schedule() call from an irq
handler may preempt the running net_rx_action() on the current cpu. I am not
sure it is worth handling, given that the irq is expected to be disabled
while net_rx_action() is running.
Do we need to protect against buggy hw or a misbehaving driver?
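For reference, the usual entry point looks roughly like this (quoting
net/core/dev.c from memory, so please double-check me):

	/* __napi_schedule(): masks hard irqs, then queues the napi on the
	 * local cpu's softnet_data via ____napi_schedule().
	 */
	void __napi_schedule(struct napi_struct *n)
	{
		unsigned long flags;

		local_irq_save(flags);
		____napi_schedule(this_cpu_ptr(&softnet_data), n);
		local_irq_restore(flags);
	}

So the irq-handler path itself runs with hard irqs masked and always passes
the local cpu's softnet_data; my question is only about such a call landing
in the middle of net_rx_action(), outside of its own irq-disabled sections.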
>
>
>
>>
>>> +		__raise_softirq_irqoff(NET_RX_SOFTIRQ);
>>>  }
>>>
>>> #ifdef CONFIG_RPS
>>> @@ -6648,6 +6652,7 @@ static __latent_entropy void net_rx_action(struct softirq_action *h)
>>>  	LIST_HEAD(list);
>>>  	LIST_HEAD(repoll);
>>>
>>> +start:
>>>  	sd->in_net_rx_action = true;
>>>  	local_irq_disable();
>>>  	list_splice_init(&sd->poll_list, &list);
>>> @@ -6659,9 +6664,18 @@ static __latent_entropy void net_rx_action(struct softirq_action *h)
>>>  	skb_defer_free_flush(sd);
>>>
>>>  	if (list_empty(&list)) {
>>> -		sd->in_net_rx_action = false;
>>> -		if (!sd_has_rps_ipi_waiting(sd) && list_empty(&repoll))
>>> -			goto end;
>>> +		if (list_empty(&repoll)) {
>>> +			sd->in_net_rx_action = false;
>>> +			barrier();
>>
>> Do we need a stronger barrier to prevent out-of-order execution
>> from cpu?
>
> We do not need more than barrier() to make sure the local cpu won't move this
> write after the following code.
Is there any reason why we need the barrier() if we are not depending
on how list_empty() is coded?
It does not seem obvious to me, at least :)
>
> It should not, even without the barrier(), because of the way
> list_empty() is coded,
> but a barrier() makes things a bit more explicit.
In that case, a comment explaining that may help a lot.
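Something along these lines is what I had in mind (just a sketch of the
comment wording, on top of this patch's second hunk):

		if (list_empty(&repoll)) {
			sd->in_net_rx_action = false;
			/* Keep the compiler from moving this store past the
			 * checks below. Not strictly needed given how
			 * list_empty() is coded, but it makes the intended
			 * ordering explicit.
			 */
			barrier();

No strong feeling about the exact wording, of course.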
Thanks.
>
>> Do we need a barrier between list_add_tail() and the sd->in_net_rx_action
>> check in ____napi_schedule() to pair with the above barrier?
>
> I do not think so.
>
> While in ____napi_schedule(), sd->in_net_rx_action is stable
> because we run with hardware IRQ masked.
>
> Thanks.
>
>
>