Message-ID: <20200717120013.0926a74e@toad>
Date: Fri, 17 Jul 2020 12:00:13 +0200
From: Jakub Sitnicki <jakub@...udflare.com>
To: Lorenzo Bianconi <lorenzo@...nel.org>
Cc: netdev@...r.kernel.org, davem@...emloft.net, ast@...nel.org,
brouer@...hat.com, daniel@...earbox.net, toke@...hat.com,
lorenzo.bianconi@...hat.com, dsahern@...nel.org,
andrii.nakryiko@...il.com, bpf@...r.kernel.org
Subject: Re: [PATCH v7 bpf-next 0/9] introduce support for XDP programs in
CPUMAP
On Tue, 14 Jul 2020 15:56:33 +0200
Lorenzo Bianconi <lorenzo@...nel.org> wrote:
> Similar to what David Ahern proposed in [1] for DEVMAPs, introduce the
> capability to attach and run an XDP program on CPUMAP entries.
> The idea behind this feature is to make it possible to define on which CPU
> the eBPF program runs if the underlying hw does not support RSS.
> I respun patch 1/6 from a previous series sent by David [2].
> The functionality has been tested on Marvell Espressobin, i40e and mlx5.
> Detailed tests results can be found here:
> https://github.com/xdp-project/xdp-project/blob/master/areas/cpumap/cpumap04-map-xdp-prog.org
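
As a sketch of what such a cpumap-attached program could look like: the "xdp_cpumap/" section prefix and the use of ctx->ingress_ifindex follow the selftest added later in this series, while the SEC() macro and struct xdp_md below are local stand-ins (normally provided by bpf_helpers.h and linux/bpf.h) so the fragment stands alone; the filter policy itself is purely hypothetical.

```c
/* Stand-ins for definitions that normally come from linux/bpf.h and
 * bpf_helpers.h; in a real BPF build, SEC() places the program in the
 * named ELF section that libbpf looks up. */
#define SEC(name)
#define XDP_DROP 1
#define XDP_PASS 2

/* Subset of the xdp_md context exposed to XDP programs. */
struct xdp_md {
	unsigned int data;
	unsigned int data_end;
	unsigned int data_meta;
	unsigned int ingress_ifindex; /* reported to cpumap programs by this series */
	unsigned int rx_queue_index;
};

/* Runs on the remote CPU for each frame enqueued to the CPUMAP entry.
 * The name after "xdp_cpumap/" is arbitrary; the prefix selects the
 * attach type added for cpumap programs. */
SEC("xdp_cpumap/filter")
int xdp_cpumap_filter(struct xdp_md *ctx)
{
	/* Hypothetical policy: drop frames whose ingress ifindex was
	 * not filled in, pass everything else up the network stack. */
	if (ctx->ingress_ifindex == 0)
		return XDP_DROP;
	return XDP_PASS;
}
```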
>
> Changes since v6:
> - rebase on top of bpf-next
> - move bpf_cpumap_val and bpf_prog in the first bpf_cpu_map_entry cache-line
>
> Changes since v5:
> - move bpf_prog_put() in put_cpu_map_entry()
> - remove READ_ONCE(rcpu->prog) in cpu_map_bpf_prog_run_xdp
> - rely on bpf_prog_get_type() instead of bpf_prog_get_type_dev() in
> __cpu_map_load_bpf_program()
>
> Changes since v4:
> - move xdp_clear_return_frame_no_direct inside rcu section
> - update David Ahern's email address
>
> Changes since v3:
> - fix typo in commit message
> - fix access to ctx->ingress_ifindex in cpumap bpf selftest
>
> Changes since v2:
> - improved comments
> - fix return value in xdp_convert_buff_to_frame
> - added patch 1/9: "cpumap: use non-locked version __ptr_ring_consume_batched"
> - do not run kmem_cache_alloc_bulk if all frames have been consumed by the XDP
> program attached to the CPUMAP entry
> - removed bpf_trace_printk in kselftest
>
> Changes since v1:
> - added performance test results
> - added kselftest support
> - fixed memory accounting with page_pool
> - extended xdp_redirect_cpu_user.c to load an external program to perform
> redirect
> - reported ifindex to attached eBPF program
> - moved bpf_cpumap_val definition to include/uapi/linux/bpf.h
>
> [1] https://patchwork.ozlabs.org/project/netdev/cover/20200529220716.75383-1-dsahern@kernel.org/
> [2] https://patchwork.ozlabs.org/project/netdev/patch/20200513014607.40418-2-dsahern@kernel.org/
>
> David Ahern (1):
> net: refactor xdp_convert_buff_to_frame
>
> Jesper Dangaard Brouer (1):
> cpumap: use non-locked version __ptr_ring_consume_batched
>
> Lorenzo Bianconi (7):
> samples/bpf: xdp_redirect_cpu_user: do not update bpf maps in option
> loop
> cpumap: formalize map value as a named struct
> bpf: cpumap: add the possibility to attach an eBPF program to cpumap
> bpf: cpumap: implement XDP_REDIRECT for eBPF programs attached to map
> entries
> libbpf: add SEC name for xdp programs attached to CPUMAP
> samples/bpf: xdp_redirect_cpu: load a eBPF program on cpumap
> selftest: add tests for XDP programs in CPUMAP entries
>
> include/linux/bpf.h | 6 +
> include/net/xdp.h | 41 ++--
> include/trace/events/xdp.h | 16 +-
> include/uapi/linux/bpf.h | 14 ++
> kernel/bpf/cpumap.c | 162 +++++++++++---
> net/core/dev.c | 9 +
> samples/bpf/xdp_redirect_cpu_kern.c | 25 ++-
> samples/bpf/xdp_redirect_cpu_user.c | 209 ++++++++++++++++--
> tools/include/uapi/linux/bpf.h | 14 ++
> tools/lib/bpf/libbpf.c | 2 +
> .../bpf/prog_tests/xdp_cpumap_attach.c | 70 ++++++
> .../bpf/progs/test_xdp_with_cpumap_helpers.c | 36 +++
> 12 files changed, 531 insertions(+), 73 deletions(-)
> create mode 100644 tools/testing/selftests/bpf/prog_tests/xdp_cpumap_attach.c
> create mode 100644 tools/testing/selftests/bpf/progs/test_xdp_with_cpumap_helpers.c
>
This started showing up when running ./test_progs from recent
bpf-next (bfdfa51702de). Any chance it is related?
[ 2950.440613] =============================================
[ 3073.281578] INFO: task cpumap/0/map:26:536 blocked for more than 860 seconds.
[ 3073.285492] Tainted: G W 5.8.0-rc4-01471-g15d51f3a516b #814
[ 3073.289177] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 3073.293021] cpumap/0/map:26 D 0 536 2 0x00004000
[ 3073.295755] Call Trace:
[ 3073.297143] __schedule+0x5ad/0xf10
[ 3073.299032] ? pci_mmcfg_check_reserved+0xd0/0xd0
[ 3073.301416] ? static_obj+0x31/0x80
[ 3073.303277] ? mark_held_locks+0x24/0x90
[ 3073.305313] ? cpu_map_update_elem+0x6d0/0x6d0
[ 3073.307544] schedule+0x6f/0x160
[ 3073.309282] schedule_preempt_disabled+0x14/0x20
[ 3073.311593] kthread+0x175/0x240
[ 3073.313299] ? kthread_create_on_node+0xd0/0xd0
[ 3073.315106] ret_from_fork+0x1f/0x30
[ 3073.316365]
Showing all locks held in the system:
[ 3073.318423] 1 lock held by khungtaskd/33:
[ 3073.319642] #0: ffffffff82d246a0 (rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x28/0x1c3
[ 3073.322249] =============================================