Message-ID: <103076c8-12df-7ef6-2660-280301754e01@intel.com>
Date:   Thu, 23 Feb 2023 16:22:19 +0100
From:   Alexander Lobakin <aleksander.lobakin@...el.com>
To:     William Tu <u9012063@...il.com>
CC:     <netdev@...r.kernel.org>, <jsankararama@...are.com>,
        <gyang@...are.com>, <doshir@...are.com>,
        <alexander.duyck@...il.com>, <alexandr.lobakin@...el.com>,
        <bang@...are.com>, <maciej.fijalkowski@...el.com>,
        <witu@...dia.com>, Yifeng Sun <yifengs@...are.com>,
        Alexander Duyck <alexanderduyck@...com>
Subject: Re: [RFC PATCH net-next v17] vmxnet3: Add XDP support.

From: William Tu <u9012063@...il.com>
Date: Mon, 20 Feb 2023 20:35:47 -0800

> The patch adds native-mode XDP support: XDP DROP, PASS, TX, and REDIRECT.

[...]

> +static int
> +vmxnet3_xdp_set(struct net_device *netdev, struct netdev_bpf *bpf,
> +		struct netlink_ext_ack *extack)
> +{
> +	struct vmxnet3_adapter *adapter = netdev_priv(netdev);
> +	struct bpf_prog *new_bpf_prog = bpf->prog;
> +	struct bpf_prog *old_bpf_prog;
> +	bool need_update;
> +	bool running;
> +	int err;
> +
> +	if (new_bpf_prog && netdev->mtu > VMXNET3_XDP_MAX_MTU) {
> +		NL_SET_ERR_MSG_MOD(extack, "MTU too large for XDP");

Minor: we now have NL_SET_ERR_MSG_FMT_MOD(), so you could even tell the
user what the maximum MTU supported for XDP is.
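
Roughly (untested, just to illustrate; the cast is only there to keep
the format specifier happy, adjust as needed):

	NL_SET_ERR_MSG_FMT_MOD(extack, "MTU %u too large for XDP (max: %u)",
			       netdev->mtu, (u32)VMXNET3_XDP_MAX_MTU);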

> +		return -EOPNOTSUPP;
> +	}
> +
> +	if (adapter->netdev->features & NETIF_F_LRO) {
> +		NL_SET_ERR_MSG_MOD(extack, "LRO is not supported with XDP");
> +		adapter->netdev->features &= ~NETIF_F_LRO;
> +	}
> +
> +	old_bpf_prog = rcu_dereference(adapter->xdp_bpf_prog);
> +	if (!new_bpf_prog && !old_bpf_prog)
> +		return 0;
> +
> +	running = netif_running(netdev);
> +	need_update = !!old_bpf_prog != !!new_bpf_prog;
> +
> +	if (running && need_update)
> +		vmxnet3_quiesce_dev(adapter);

[...]

> +		bpf_warn_invalid_xdp_action(rq->adapter->netdev, prog, act);
> +		fallthrough;
> +	case XDP_ABORTED:
> +		trace_xdp_exception(rq->adapter->netdev, prog, act);
> +		rq->stats.xdp_aborted++;
> +		break;
> +	case XDP_DROP:
> +		rq->stats.xdp_drops++;
> +		break;
> +	}
> +
> +	page_pool_recycle_direct(rq->page_pool, page);

You can speed things up a bit here. page_pool_recycle_direct() uses
->max_len to sync DMA for device when recycling. You can use
page_pool_put_page() instead and specify the actual length that needs
to be synced. This is a bit tricky, but some systems have incredibly
expensive DMA synchronization and every byte counts there.
"Tricky" because you can't just pass the original frame size here, and
at the same time you can't pass the current xdp.data_end - xdp.data
either: xdp.data may move both left and right, and the same goes for
.data_end. So what you basically need is the following:

sync_len = max(orig_len,
	       xdp.data_end - xdp.data_hard_start - page_pool->p.offset)

We don't care about the data between data_hard_start and p.offset --
the hardware doesn't touch it. But we do care about the whole area to
the right of it that might've been touched.
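
In code, something along these lines (untested sketch, assuming your
xdp_buff is named xdp and orig_len is the original Rx buffer length
your descriptor reports):

	u32 sync_len;

	sync_len = max_t(u32, orig_len,
			 xdp.data_end - xdp.data_hard_start -
			 rq->page_pool->p.offset);

	/* Sync only the area that may have been touched instead of
	 * the pool-wide ->max_len.
	 */
	page_pool_put_page(rq->page_pool, page, sync_len, true);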

Anyway, up to you. On server x86_64 platforms, DMA sync is usually a no-op.

> +
> +	return act;
> +}

[...]

> +static inline bool vmxnet3_xdp_enabled(struct vmxnet3_adapter *adapter)
> +{
> +	return !!rcu_access_pointer(adapter->xdp_bpf_prog);
> +}
> +
> +#endif

I'm happy with the rest of the patch, thanks! Glad to see all the
feedback addressed where applicable.

Olek
