Message-ID: <d3a041f3-5fba-a114-1796-492b68a8c011@redhat.com>
Date: Wed, 25 Jan 2023 11:35:57 +0100
From: Jesper Dangaard Brouer <jbrouer@...hat.com>
To: Lorenzo Bianconi <lorenzo@...nel.org>, bpf@...r.kernel.org
Cc: brouer@...hat.com, netdev@...r.kernel.org, ast@...nel.org,
daniel@...earbox.net, andrii@...nel.org, davem@...emloft.net,
kuba@...nel.org, hawk@...nel.org, pabeni@...hat.com,
edumazet@...gle.com, toke@...hat.com, memxor@...il.com,
alardam@...il.com, saeedm@...dia.com, anthony.l.nguyen@...el.com,
gospo@...adcom.com, vladimir.oltean@....com, nbd@....name,
john@...ozen.org, leon@...nel.org, simon.horman@...igine.com,
aelior@...vell.com, christophe.jaillet@...adoo.fr,
ecree.xilinx@...il.com, mst@...hat.com, bjorn@...nel.org,
magnus.karlsson@...el.com, maciej.fijalkowski@...el.com,
intel-wired-lan@...ts.osuosl.org, lorenzo.bianconi@...hat.com,
martin.lau@...ux.dev
Subject: Re: [PATCH v2 bpf-next 6/8] bpf: devmap: check XDP features in
__xdp_enqueue routine
On 25/01/2023 01.33, Lorenzo Bianconi wrote:
> Check if the destination device implements ndo_xdp_xmit callback relying
> on NETDEV_XDP_ACT_NDO_XMIT flags. Moreover, check if the destination device
> supports XDP non-linear frame in __xdp_enqueue and is_valid_dst routines.
> This patch allows to perform XDP_REDIRECT on non-linear XDP buffers.
>
> Co-developed-by: Kumar Kartikeya Dwivedi <memxor@...il.com>
> Signed-off-by: Kumar Kartikeya Dwivedi <memxor@...il.com>
> Signed-off-by: Lorenzo Bianconi <lorenzo@...nel.org>
> ---
>  kernel/bpf/devmap.c | 16 +++++++++++++---
>  net/core/filter.c   | 13 +++++--------
> 2 files changed, 18 insertions(+), 11 deletions(-)
>
LGTM
Acked-by: Jesper Dangaard Brouer <brouer@...hat.com>
> diff --git a/kernel/bpf/devmap.c b/kernel/bpf/devmap.c
> index d01e4c55b376..2675fefc6cb6 100644
> --- a/kernel/bpf/devmap.c
> +++ b/kernel/bpf/devmap.c
> @@ -474,7 +474,11 @@ static inline int __xdp_enqueue(struct net_device *dev, struct xdp_frame *xdpf,
> {
> 	int err;
>
> -	if (!dev->netdev_ops->ndo_xdp_xmit)
> +	if (!(dev->xdp_features & NETDEV_XDP_ACT_NDO_XMIT))
> +		return -EOPNOTSUPP;
Good: dev->netdev_ops and dev->xdp_features are on the same cacheline.
This means dev->netdev_ops will already be hot once we need to deref
netdev_ops->ndo_xdp_xmit, which only happens as part of bulking towards
the driver.
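
(Aside for readers of the archive: a minimal, self-contained sketch of the
pattern being praised here. The struct layout and flag values below are
illustrative only, not the real struct net_device or the real
NETDEV_XDP_ACT_* bits.)

#include <stdbool.h>
#include <stdint.h>

#define NETDEV_XDP_ACT_NDO_XMIT		(1U << 2)	/* illustrative value */
#define NETDEV_XDP_ACT_NDO_XMIT_SG	(1U << 6)	/* illustrative value */

struct net_device_ops;			/* only dereferenced at bulk-flush time */

struct net_device {
	const struct net_device_ops *netdev_ops; /* assumed to sit next to ... */
	uint32_t xdp_features;			  /* ... xdp_features            */
};

/* Fast path: reject the destination using feature bits only; no deref of
 * the ops table happens here, but its cacheline is now warm for the later
 * netdev_ops->ndo_xdp_xmit call in the bulk-flush path. */
static bool dst_can_xmit(const struct net_device *dev, bool has_frags)
{
	if (!(dev->xdp_features & NETDEV_XDP_ACT_NDO_XMIT))
		return false;
	if (has_frags && !(dev->xdp_features & NETDEV_XDP_ACT_NDO_XMIT_SG))
		return false;
	return true;
}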
> +
> +	if (unlikely(!(dev->xdp_features & NETDEV_XDP_ACT_NDO_XMIT_SG) &&
> +		     xdp_frame_has_frags(xdpf)))
Good: xdp_frame_has_frags() looks at xdpf->flags and avoids dereferencing
the shared_info area.
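
(For readers who don't have the helper in their head: it boils down to a
flag test on the xdp_frame itself. Trimmed-down paraphrase below, not the
verbatim include/net/xdp.h code; the flag value is illustrative.)

#include <stdbool.h>
#include <stdint.h>

#define XDP_FLAGS_HAS_FRAGS	(1U << 0)	/* illustrative value */

/* Trimmed to the single field this check needs; the real xdp_frame also
 * carries data/len/headroom etc., with the skb_shared_info placed at the
 * end of the frame's memory area. */
struct xdp_frame {
	uint32_t flags;
};

static inline bool frame_has_frags(const struct xdp_frame *frame)
{
	/* Reads only frame->flags; the shared_info area is never touched. */
	return !!(frame->flags & XDP_FLAGS_HAS_FRAGS);
}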
> 		return -EOPNOTSUPP;
>
> 	err = xdp_ok_fwd_dev(dev, xdp_get_frame_len(xdpf));
> @@ -532,8 +536,14 @@ int dev_map_enqueue(struct bpf_dtab_netdev *dst, struct xdp_frame *xdpf,
>
> static bool is_valid_dst(struct bpf_dtab_netdev *obj, struct xdp_frame *xdpf)
> {
> -	if (!obj ||
> -	    !obj->dev->netdev_ops->ndo_xdp_xmit)
> +	if (!obj)
> +		return false;
> +
> +	if (!(obj->dev->xdp_features & NETDEV_XDP_ACT_NDO_XMIT))
> +		return false;
> +
> +	if (unlikely(!(obj->dev->xdp_features & NETDEV_XDP_ACT_NDO_XMIT_SG) &&
> +		     xdp_frame_has_frags(xdpf)))
> 		return false;
>
> 	if (xdp_ok_fwd_dev(obj->dev, xdp_get_frame_len(xdpf)))
> diff --git a/net/core/filter.c b/net/core/filter.c
> index ed08dbf10338..aeebe21a7eff 100644
> --- a/net/core/filter.c
> +++ b/net/core/filter.c
> @@ -4314,16 +4314,13 @@ int xdp_do_redirect(struct net_device *dev, struct xdp_buff *xdp,
> 	struct bpf_redirect_info *ri = this_cpu_ptr(&bpf_redirect_info);
> 	enum bpf_map_type map_type = ri->map_type;
>
> -	/* XDP_REDIRECT is not fully supported yet for xdp frags since
> -	 * not all XDP capable drivers can map non-linear xdp_frame in
> -	 * ndo_xdp_xmit.
> -	 */
> -	if (unlikely(xdp_buff_has_frags(xdp) &&
> -		     map_type != BPF_MAP_TYPE_CPUMAP))
> -		return -EOPNOTSUPP;
Nice to see this limitation being lifted :-)
> +	if (map_type == BPF_MAP_TYPE_XSKMAP) {
> +		/* XDP_REDIRECT is not supported for AF_XDP frags yet. */
> +		if (unlikely(xdp_buff_has_frags(xdp)))
> +			return -EOPNOTSUPP;
>
> -	if (map_type == BPF_MAP_TYPE_XSKMAP)
> 		return __xdp_do_redirect_xsk(ri, dev, xdp, xdp_prog);
> +	}
>
> 	return __xdp_do_redirect_frame(ri, dev, xdp_convert_buff_to_frame(xdp),
> 				       xdp_prog);
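
For anyone wanting to try this from the BPF side: below is a minimal sketch
of a frags-aware XDP program redirecting through a devmap, which this series
makes usable for multi-buffer frames once the target device advertises
NETDEV_XDP_ACT_NDO_XMIT_SG. Map name, index and layout are made up for
illustration; "xdp.frags" is the libbpf section name that sets
BPF_F_XDP_HAS_FRAGS.

// SPDX-License-Identifier: GPL-2.0
/* Hypothetical example, not part of this series. */
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct {
	__uint(type, BPF_MAP_TYPE_DEVMAP);
	__uint(max_entries, 8);
	__type(key, __u32);
	__type(value, __u32);	/* egress ifindex */
} tx_ports SEC(".maps");

SEC("xdp.frags")
int xdp_redirect_mb(struct xdp_md *ctx)
{
	__u32 key = 0;	/* slot in tx_ports holding the target ifindex */

	/* On success this returns XDP_REDIRECT; the new checks in
	 * __xdp_enqueue()/is_valid_dst() drop the frame later if the
	 * target lacks NDO_XMIT(_SG) support. Falls back to XDP_PASS
	 * if the map lookup fails. */
	return bpf_redirect_map(&tx_ports, key, XDP_PASS);
}

char _license[] SEC("license") = "GPL";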