Date:   Thu, 26 Nov 2020 11:51:19 +0100
From:   Jesper Dangaard Brouer <brouer@...hat.com>
To:     Hangbin Liu <liuhangbin@...il.com>
Cc:     bpf@...r.kernel.org, netdev@...r.kernel.org,
        Daniel Borkmann <daniel@...earbox.net>,
        John Fastabend <john.fastabend@...il.com>,
        Toke Høiland-Jørgensen <toke@...hat.com>,
        Tariq Toukan <tariqt@...lanox.com>,
        Maciej Fijalkowski <maciej.fijalkowski@...el.com>,
        brouer@...hat.com
Subject: Re: [PATCHv2 bpf-next] samples/bpf: add xdp program on egress for
 xdp_redirect_map

On Thu, 26 Nov 2020 16:43:25 +0800
Hangbin Liu <liuhangbin@...il.com> wrote:

> The current sample xdp_redirect_map only counts packets on ingress, so
> we can't tell whether the packets were redirected or dropped. Add a
> counter on the egress interface so we know how many packets were
> actually redirected.

This is not true.

The 2nd devmap XDP-prog runs in the same RX-context, so it doesn't tell
us whether the redirect was successful.  I looked up the code, and the
2nd XDP-prog is even allowed to run when the egress driver doesn't
support the xmit NDO (dev->netdev_ops->ndo_xdp_xmit), which makes an
output counter placed here very misleading.
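
For context, the kind of egress counter under discussion would look
roughly like this (a minimal sketch, not the actual patch; the map name,
program name and SEC() annotation are illustrative and depend on the
libbpf version used):

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct {
	__uint(type, BPF_MAP_TYPE_PERCPU_ARRAY);
	__uint(max_entries, 1);
	__type(key, __u32);
	__type(value, long);
} tx_cnt SEC(".maps");

SEC("xdp_devmap")
int xdp_count_egress(struct xdp_md *ctx)
{
	__u32 key = 0;
	long *cnt = bpf_map_lookup_elem(&tx_cnt, &key);

	/* Counts invocations of the devmap prog, not successful xmits */
	if (cnt)
		*cnt += 1;
	return XDP_PASS;
}

char _license[] SEC("license") = "GPL";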

-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  LinkedIn: http://www.linkedin.com/in/brouer
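
The devmap code in question (kernel/bpf/devmap.c): dev_map_enqueue()
runs the 2nd XDP-prog via dev_map_run_prog() before __xdp_enqueue()
ever checks whether the target driver implements ndo_xdp_xmit: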


static inline int __xdp_enqueue(struct net_device *dev, struct xdp_buff *xdp,
			       struct net_device *dev_rx)
{
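	/* Note: a target device without ndo_xdp_xmit is only rejected here,
	 * i.e. after any devmap XDP-prog has already run (see
	 * dev_map_enqueue() below).
	 */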
	struct xdp_frame *xdpf;
	int err;

	if (!dev->netdev_ops->ndo_xdp_xmit)
		return -EOPNOTSUPP;

	err = xdp_ok_fwd_dev(dev, xdp->data_end - xdp->data);
	if (unlikely(err))
		return err;

	xdpf = xdp_convert_buff_to_frame(xdp);
	if (unlikely(!xdpf))
		return -EOVERFLOW;

	bq_enqueue(dev, xdpf, dev_rx);
	return 0;
}

static struct xdp_buff *dev_map_run_prog(struct net_device *dev,
					 struct xdp_buff *xdp,
					 struct bpf_prog *xdp_prog)
{
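	/* Runs the devmap (2nd) XDP-prog on the buffer while still in the
	 * RX context of the receiving CPU; returns NULL when the frame is
	 * dropped or aborted.
	 */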
	struct xdp_txq_info txq = { .dev = dev };
	u32 act;

	xdp_set_data_meta_invalid(xdp);
	xdp->txq = &txq;

	act = bpf_prog_run_xdp(xdp_prog, xdp);
	switch (act) {
	case XDP_PASS:
		return xdp;
	case XDP_DROP:
		break;
	default:
		bpf_warn_invalid_xdp_action(act);
		fallthrough;
	case XDP_ABORTED:
		trace_xdp_exception(dev, xdp_prog, act);
		break;
	}

	xdp_return_buff(xdp);
	return NULL;
}

int dev_xdp_enqueue(struct net_device *dev, struct xdp_buff *xdp,
		    struct net_device *dev_rx)
{
	return __xdp_enqueue(dev, xdp, dev_rx);
}

int dev_map_enqueue(struct bpf_dtab_netdev *dst, struct xdp_buff *xdp,
		    struct net_device *dev_rx)
{
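	/* dst->xdp_prog (the 2nd, devmap-attached XDP-prog) runs below,
	 * before __xdp_enqueue() checks ndo_xdp_xmit support on the target
	 * device.
	 */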
	struct net_device *dev = dst->dev;

	if (dst->xdp_prog) {
		xdp = dev_map_run_prog(dev, xdp, dst->xdp_prog);
		if (!xdp)
			return 0;
	}
	return __xdp_enqueue(dev, xdp, dev_rx);
}
