Message-ID: <ac8d2c20-f0fc-725c-a0a9-bee0b1620af1@gmail.com>
Date: Mon, 8 Oct 2018 08:31:50 -0700
From: Eric Dumazet <eric.dumazet@...il.com>
To: Björn Töpel <bjorn.topel@...il.com>,
magnus.karlsson@...el.com, alexander.h.duyck@...el.com,
alexander.duyck@...il.com, john.fastabend@...il.com, ast@...com,
brouer@...hat.com, willemdebruijn.kernel@...il.com,
daniel@...earbox.net, mst@...hat.com, netdev@...r.kernel.org
Cc: Björn Töpel <bjorn.topel@...el.com>,
michael.lundkvist@...csson.com, jesse.brandeburg@...el.com,
anjali.singhai@...el.com, qi.z.zhang@...el.com
Subject: Re: [PATCH bpf-next v3 07/15] bpf: introduce new bpf AF_XDP map type
BPF_MAP_TYPE_XSKMAP
On 05/02/2018 04:01 AM, Björn Töpel wrote:
> From: Björn Töpel <bjorn.topel@...el.com>
>
> The xskmap is yet another BPF map, very much inspired by
> dev/cpu/sockmap, and is a holder of AF_XDP sockets. A user application
> adds AF_XDP sockets into the map, and by using the bpf_redirect_map
> helper, an XDP program can redirect XDP frames to an AF_XDP socket.
>
> Note that a socket that is bound to a certain ifindex/queue index will
> *only* accept XDP frames from that netdev/queue index. If an XDP
> program tries to redirect from a netdev/queue index other than what
> the socket is bound to, the frame will not be received on the socket.
>
> A socket can reside in multiple maps.
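
(Not part of this patch, just for context: a minimal XDP program using
such a map could look roughly like the snippet below, keyed on the
receive queue index. The names and the bpf_map_def/SEC() style are only
illustrative, following the kernel samples of this era.)

	struct bpf_map_def SEC("maps") xsks_map = {
		.type		= BPF_MAP_TYPE_XSKMAP,
		.key_size	= sizeof(int),
		.value_size	= sizeof(int),
		.max_entries	= 4,
	};

	SEC("xdp_sock")
	int xdp_sock_prog(struct xdp_md *ctx)
	{
		/* Redirect the frame to the AF_XDP socket bound to this
		 * rx queue, if any; if no socket is present at that index
		 * the redirect fails and the frame is dropped.
		 */
		return bpf_redirect_map(&xsks_map, ctx->rx_queue_index, 0);
	}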
>
> v3: Fixed race and simplified code.
> v2: Removed one indirection in map lookup.
>
> Signed-off-by: Björn Töpel <bjorn.topel@...el.com>
> ---
> include/linux/bpf.h | 25 +++++
> include/linux/bpf_types.h | 3 +
> include/net/xdp_sock.h | 7 ++
> include/uapi/linux/bpf.h | 1 +
> kernel/bpf/Makefile | 3 +
> kernel/bpf/verifier.c | 8 +-
> kernel/bpf/xskmap.c | 239 ++++++++++++++++++++++++++++++++++++++++++++++
> net/xdp/xsk.c | 5 +
> 8 files changed, 289 insertions(+), 2 deletions(-)
> create mode 100644 kernel/bpf/xskmap.c
>
This function is called under rcu_read_lock(), from map_update_elem().
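
(For reference, the generic syscall path in kernel/bpf/syscall.c does
roughly the following around the map-specific callback, so anything
that can sleep is off limits in this function:)

	rcu_read_lock();
	err = map->ops->map_update_elem(map, key, value, attr->flags);
	rcu_read_unlock();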
> +
> +static int xsk_map_update_elem(struct bpf_map *map, void *key, void *value,
> + u64 map_flags)
> +{
> + struct xsk_map *m = container_of(map, struct xsk_map, map);
> + u32 i = *(u32 *)key, fd = *(u32 *)value;
> + struct xdp_sock *xs, *old_xs;
> + struct socket *sock;
> + int err;
> +
> + if (unlikely(map_flags > BPF_EXIST))
> + return -EINVAL;
> + if (unlikely(i >= m->map.max_entries))
> + return -E2BIG;
> + if (unlikely(map_flags == BPF_NOEXIST))
> + return -EEXIST;
> +
> + sock = sockfd_lookup(fd, &err);
> + if (!sock)
> + return err;
> +
> + if (sock->sk->sk_family != PF_XDP) {
> + sockfd_put(sock);
> + return -EOPNOTSUPP;
> + }
> +
> + xs = (struct xdp_sock *)sock->sk;
> +
> + if (!xsk_is_setup_for_bpf_map(xs)) {
> + sockfd_put(sock);
> + return -EOPNOTSUPP;
> + }
> +
> + sock_hold(sock->sk);
> +
> + old_xs = xchg(&m->xsk_map[i], xs);
> + if (old_xs) {
> + /* Make sure we've flushed everything. */
So it is illegal to call synchronize_net() here, since it is a reschedule
point: it can sleep, which is not allowed under rcu_read_lock().
> + synchronize_net();
> + sock_put((struct sock *)old_xs);
> + }
> +
> + sockfd_put(sock);
> + return 0;
> +}
>
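(A non-sleeping alternative, only as a sketch: mark XSK sockets
SOCK_RCU_FREE when they are created, so that sk_destruct is deferred
until after an RCU grace period, and then drop the synchronize_net()
call here entirely:)

	/* in xsk_create(), net/xdp/xsk.c: defer socket destruction past
	 * an RCU grace period, protecting concurrent redirect lookups.
	 */
	sock_set_flag(sk, SOCK_RCU_FREE);

	/* in xsk_map_update_elem(): the put no longer needs an explicit
	 * grace-period wait before dropping the old reference.
	 */
	old_xs = xchg(&m->xsk_map[i], xs);
	if (old_xs)
		sock_put((struct sock *)old_xs);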