Message-ID: <fa860d02-0310-2666-1043-6909dc68ea01@huawei.com>
Date: Wed, 29 Mar 2023 20:47:23 +0800
From: Yunsheng Lin <linyunsheng@...wei.com>
To: Eric Dumazet <edumazet@...gle.com>,
"David S . Miller" <davem@...emloft.net>,
Jakub Kicinski <kuba@...nel.org>,
Paolo Abeni <pabeni@...hat.com>
CC: Jason Xing <kernelxing@...cent.com>, <netdev@...r.kernel.org>,
<eric.dumazet@...il.com>
Subject: Re: [PATCH net-next 4/4] net: optimize ____napi_schedule() to avoid
extra NET_RX_SOFTIRQ
On 2023/3/29 7:50, Eric Dumazet wrote:
> ____napi_schedule() adds a napi into current cpu softnet_data poll_list,
> then raises NET_RX_SOFTIRQ to make sure net_rx_action() will process it.
>
> Idea of this patch is to not raise NET_RX_SOFTIRQ when being called indirectly
> from net_rx_action(), because we can process poll_list from this point,
> without going to full softirq loop.
>
> This needs a change in net_rx_action() to make sure we restart
> its main loop if sd->poll_list was updated without NET_RX_SOFTIRQ
> being raised.
>
> Signed-off-by: Eric Dumazet <edumazet@...gle.com>
> Cc: Jason Xing <kernelxing@...cent.com>
> ---
> net/core/dev.c | 22 ++++++++++++++++++----
> 1 file changed, 18 insertions(+), 4 deletions(-)
>
> diff --git a/net/core/dev.c b/net/core/dev.c
> index f34ce93f2f02e7ec71f5e84d449fa99b7a882f0c..0c4b21291348d4558f036fb05842dab023f65dc3 100644
> --- a/net/core/dev.c
> +++ b/net/core/dev.c
> @@ -4360,7 +4360,11 @@ static inline void ____napi_schedule(struct softnet_data *sd,
> }
>
> list_add_tail(&napi->poll_list, &sd->poll_list);
> - __raise_softirq_irqoff(NET_RX_SOFTIRQ);
> + /* If not called from net_rx_action()
> + * we have to raise NET_RX_SOFTIRQ.
> + */
> + if (!sd->in_net_rx_action)
It seems sd->in_net_rx_action may be read/written by different CPUs at the
same time. Does it need something like READ_ONCE()/WRITE_ONCE() to access
sd->in_net_rx_action?
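Something like the below is what I had in mind, just an untested sketch of
the annotation (keeping the rest of the patch as-is):

	/* ____napi_schedule(): annotate the read */
	list_add_tail(&napi->poll_list, &sd->poll_list);
	if (!READ_ONCE(sd->in_net_rx_action))
		__raise_softirq_irqoff(NET_RX_SOFTIRQ);

	/* net_rx_action(): annotate the writes */
	WRITE_ONCE(sd->in_net_rx_action, true);
	...
	WRITE_ONCE(sd->in_net_rx_action, false);
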
> + __raise_softirq_irqoff(NET_RX_SOFTIRQ);
> }
>
> #ifdef CONFIG_RPS
> @@ -6648,6 +6652,7 @@ static __latent_entropy void net_rx_action(struct softirq_action *h)
> LIST_HEAD(list);
> LIST_HEAD(repoll);
>
> +start:
> sd->in_net_rx_action = true;
> local_irq_disable();
> list_splice_init(&sd->poll_list, &list);
> @@ -6659,9 +6664,18 @@ static __latent_entropy void net_rx_action(struct softirq_action *h)
> skb_defer_free_flush(sd);
>
> if (list_empty(&list)) {
> - sd->in_net_rx_action = false;
> - if (!sd_has_rps_ipi_waiting(sd) && list_empty(&repoll))
> - goto end;
> + if (list_empty(&repoll)) {
> + sd->in_net_rx_action = false;
> + barrier();
Do we need a stronger barrier here to prevent out-of-order execution by
the CPU?
And do we need a barrier between list_add_tail() and the
sd->in_net_rx_action check in ____napi_schedule() to pair with the above
barrier?
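If those accesses can really race across CPUs, the pairing I am asking
about would look roughly like this (untested sketch, only to show the
ordering):

	/* net_rx_action(): order the in_net_rx_action store before
	 * the poll_list re-check.
	 */
	sd->in_net_rx_action = false;
	smp_mb();
	if (!list_empty(&sd->poll_list))
		goto start;

	/* ____napi_schedule(): order the list_add_tail() before the
	 * in_net_rx_action check, pairing with the smp_mb() above.
	 */
	list_add_tail(&napi->poll_list, &sd->poll_list);
	smp_mb();
	if (!sd->in_net_rx_action)
		__raise_softirq_irqoff(NET_RX_SOFTIRQ);
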
> + /* We need to check if ____napi_schedule()
> + * had refilled poll_list while
> + * sd->in_net_rx_action was true.
> + */
> + if (!list_empty(&sd->poll_list))
> + goto start;
> + if (!sd_has_rps_ipi_waiting(sd))
> + goto end;
> + }
> break;
> }
>
>