Message-Id: <bb627106428ea3223610f5623142c24270f0e14e.1618330734.git.lorenzo@kernel.org>
Date: Tue, 13 Apr 2021 18:22:02 +0200
From: Lorenzo Bianconi <lorenzo@...nel.org>
To: bpf@...r.kernel.org
Cc: netdev@...r.kernel.org, lorenzo.bianconi@...hat.com,
davem@...emloft.net, kuba@...nel.org, ast@...nel.org,
daniel@...earbox.net, brouer@...hat.com, song@...nel.org
Subject: [PATCH v2 bpf-next] cpumap: bulk skb using netif_receive_skb_list
Rely on the netif_receive_skb_list routine to send skbs converted from
xdp_frames in cpu_map_kthread_run, in order to improve i-cache usage.
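
For illustration only (this sketch is not part of the diff below), the hot
loop in cpu_map_kthread_run after the change looks roughly as follows, with
error handling trimmed:

	LIST_HEAD(list);

	for (i = 0; i < nframes; i++) {
		struct xdp_frame *xdpf = frames[i];
		struct sk_buff *skb = skbs[i];

		skb = __xdp_build_skb_from_frame(xdpf, skb, xdpf->dev_rx);
		if (!skb) {
			xdp_return_frame(xdpf);
			continue;
		}

		/* was: one netif_receive_skb_core() call per packet */
		list_add_tail(&skb->list, &list);
	}

	/* a single call drains the whole batch into the stack, so the
	 * receive path stays hot in the i-cache across packets
	 */
	netif_receive_skb_list(&list);

The full context is in the diff below.
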
The proposed patch has been tested running the xdp_redirect_cpu bpf sample
available in the kernel tree, which is used to redirect UDP frames from the
ixgbe driver to a cpumap entry and then into the networking stack.
UDP frames are generated using pktgen.
$xdp_redirect_cpu --cpu <cpu> --progname xdp_cpu_map0 --dev <eth>
bpf-next: ~2.2Mpps
bpf-next + cpumap skb-list: ~3.15Mpps
Signed-off-by: Lorenzo Bianconi <lorenzo@...nel.org>
---
Changes since v1:
- fixed comment
- rebased on top of bpf-next tree
---
kernel/bpf/cpumap.c | 11 +++++------
1 file changed, 5 insertions(+), 6 deletions(-)
diff --git a/kernel/bpf/cpumap.c b/kernel/bpf/cpumap.c
index 0cf2791d5099..d89551a508b2 100644
--- a/kernel/bpf/cpumap.c
+++ b/kernel/bpf/cpumap.c
@@ -27,7 +27,7 @@
 #include <linux/capability.h>
 #include <trace/events/xdp.h>
 
-#include <linux/netdevice.h>	/* netif_receive_skb_core */
+#include <linux/netdevice.h>	/* netif_receive_skb_list */
 #include <linux/etherdevice.h>	/* eth_type_trans */
 
 /* General idea: XDP packets getting XDP redirected to another CPU,
@@ -257,6 +257,7 @@ static int cpu_map_kthread_run(void *data)
 		void *frames[CPUMAP_BATCH];
 		void *skbs[CPUMAP_BATCH];
 		int i, n, m, nframes;
+		LIST_HEAD(list);
 
 		/* Release CPU reschedule checks */
 		if (__ptr_ring_empty(rcpu->queue)) {
@@ -305,7 +306,6 @@ static int cpu_map_kthread_run(void *data)
 		for (i = 0; i < nframes; i++) {
 			struct xdp_frame *xdpf = frames[i];
 			struct sk_buff *skb = skbs[i];
-			int ret;
 
 			skb = __xdp_build_skb_from_frame(xdpf, skb,
 							 xdpf->dev_rx);
@@ -314,11 +314,10 @@ static int cpu_map_kthread_run(void *data)
 				continue;
 			}
 
-			/* Inject into network stack */
-			ret = netif_receive_skb_core(skb);
-			if (ret == NET_RX_DROP)
-				drops++;
+			list_add_tail(&skb->list, &list);
 		}
 
+		netif_receive_skb_list(&list);
+
 		/* Feedback loop via tracepoint */
 		trace_xdp_cpumap_kthread(rcpu->map_id, n, drops, sched, &stats);
--
2.30.2