Message-ID: <CANn89iKtD8xiedfvDEWOPQAPeqwDM0HxWqMYgk7C9Ar_gTcGOA@mail.gmail.com>
Date: Thu, 30 Mar 2023 05:15:46 +0200
From: Eric Dumazet <edumazet@...gle.com>
To: Jakub Kicinski <kuba@...nel.org>
Cc: "David S . Miller" <davem@...emloft.net>,
Paolo Abeni <pabeni@...hat.com>,
Jason Xing <kernelxing@...cent.com>, netdev@...r.kernel.org,
eric.dumazet@...il.com
Subject: Re: [PATCH net-next 0/4] net: rps/rfs improvements
On Thu, Mar 30, 2023 at 5:04 AM Jakub Kicinski <kuba@...nel.org> wrote:
>
> On Tue, 28 Mar 2023 23:50:17 +0000 Eric Dumazet wrote:
> > Overall, in an intensive RPC workload, with 32 TX/RX queues with RFS
> > I was able to observe a ~10% reduction of NET_RX_SOFTIRQ
> > invocations.
>
> small clarification on the testing:
>
> invocations == calls to net_rx_action()
> or
> invocations == calls to __raise_softirq_irqoff(NET_RX_SOFTIRQ)
This was from "grep NET_RX /proc/softirqs" (more precisely, a tool
parsing /proc/softirqs), so it should match the number of calls to
net_rx_action(), but I can double check if you want.
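For reference, a minimal sketch of that kind of counter (plain C,
assuming the usual /proc/softirqs layout; not the actual tool used):

/*
 * Sum the per-CPU NET_RX counters from /proc/softirqs.
 * Run before and after a test window and diff the totals.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
	FILE *f = fopen("/proc/softirqs", "r");
	char line[4096];
	unsigned long long total = 0;

	if (!f) {
		perror("/proc/softirqs");
		return 1;
	}
	while (fgets(line, sizeof(line), f)) {
		char *p = line;

		/* Skip leading spaces, look for the NET_RX: row. */
		while (*p == ' ')
			p++;
		if (strncmp(p, "NET_RX:", 7))
			continue;
		/* Sum the per-CPU counts that follow the label. */
		for (p += 7; *p; ) {
			char *end;
			unsigned long long v = strtoull(p, &end, 10);

			if (end == p)
				break;
			total += v;
			p = end;
		}
		break;
	}
	fclose(f);
	printf("NET_RX total: %llu\n", total);
	return 0;
}

Running it before and after each test window and subtracting the two
totals gives the number of NET_RX_SOFTIRQ invocations in that window.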
(I had a simple hack to enable/disable the optimizations with a hijacked sysctl)
Turning them on/off (via the low bit of netdev_max_backlog, as the
hack below shows) with:
echo 1001 >/proc/sys/net/core/netdev_max_backlog
<gather stats>
echo 1000 >/proc/sys/net/core/netdev_max_backlog
<gather stats>
diff --git a/net/core/dev.c b/net/core/dev.c
index 0c4b21291348d4558f036fb05842dab023f65dc3..f8c6fde6100c8e4812037bd070e11733409bd0a0 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -6653,7 +6653,7 @@ static __latent_entropy void net_rx_action(struct softirq_action *h)
 	LIST_HEAD(repoll);
 
 start:
-	sd->in_net_rx_action = true;
+	sd->in_net_rx_action = (netdev_max_backlog & 1);
 	local_irq_disable();
 	list_splice_init(&sd->poll_list, &list);
 	local_irq_enable();
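With this hack, an odd netdev_max_backlog (1001) keeps
sd->in_net_rx_action set as in the original code, while an even value
(1000) forces it to stay false, which is what the two echo commands
above toggle between the stat-gathering runs.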