Date:   Thu, 4 Jan 2018 20:54:00 -0800
From:   John Fastabend <john.fastabend@...il.com>
To:     Jesper Dangaard Brouer <brouer@...hat.com>,
        Daniel Borkmann <borkmann@...earbox.net>,
        Alexei Starovoitov <alexei.starovoitov@...il.com>
Cc:     netdev@...r.kernel.org, dsahern@...il.com, gospo@...adcom.com,
        bjorn.topel@...el.com, michael.chan@...adcom.com
Subject: Re: [bpf-next V4 PATCH 01/14] xdp: base API for new XDP rx-queue info
 concept

On 01/03/2018 02:25 AM, Jesper Dangaard Brouer wrote:
> This patch only introduces the core data structures and API functions.
> All XDP enabled drivers must use the API before this info can be used.
> 
> There is a need for XDP to know more about the RX-queue a given XDP
> frame has arrived on, for both the XDP bpf-prog and the kernel side.
> 
> Instead of extending xdp_buff each time new info is needed, the patch
> creates a separate read-mostly struct xdp_rxq_info, that contains this
> info.  We stress this data/cache-line is for read-only info.  This is
> NOT for dynamic per packet info, use the data_meta for such use-cases.
> 
> The performance advantage is that this info can be set up at RX-ring
> init time, instead of updating N members in xdp_buff.  A possible
> (driver level) micro optimization is that the xdp_buff->rxq assignment
> could be done once per XDP/NAPI loop.  The extra pointer deref only
> happens for programs needing access to this info (thus, no slowdown
> for existing use-cases).
> 
> Signed-off-by: Jesper Dangaard Brouer <brouer@...hat.com>
> ---
>  include/linux/filter.h |    2 +
>  include/net/xdp.h      |   47 ++++++++++++++++++++++++++++++++++
>  net/core/Makefile      |    2 +
>  net/core/xdp.c         |   67 ++++++++++++++++++++++++++++++++++++++++++++++++
>  4 files changed, 117 insertions(+), 1 deletion(-)
>  create mode 100644 include/net/xdp.h
>  create mode 100644 net/core/xdp.c
> 
> diff --git a/include/linux/filter.h b/include/linux/filter.h
> index 2b0df2703671..425056c7f96c 100644
> --- a/include/linux/filter.h
> +++ b/include/linux/filter.h
> @@ -20,6 +20,7 @@
>  #include <linux/set_memory.h>
>  #include <linux/kallsyms.h>
>  
> +#include <net/xdp.h>
>  #include <net/sch_generic.h>

Perhaps just a 'struct xdp_rxq_info' forward declaration is needed here
instead of the full include. At least that is the pattern used for
sk_buff and sock. (By the way, sorry for the late v4 feedback.)
 
>  
>  #include <uapi/linux/filter.h>
> @@ -503,6 +504,7 @@ struct xdp_buff {
>  	void *data_end;
>  	void *data_meta;
>  	void *data_hard_start;
> +	struct xdp_rxq_info *rxq;
>  };
>  
>  /* Compute the linear packet data range [data, data_end) which
> diff --git a/include/net/xdp.h b/include/net/xdp.h
> new file mode 100644
> index 000000000000..86c41631a908
> --- /dev/null
> +++ b/include/net/xdp.h 

[...]

> +
> +/* Returns 0 on success, negative on failure */
> +int xdp_rxq_info_reg(struct xdp_rxq_info *xdp_rxq,
> +		     struct net_device *dev, u32 queue_index)
> +{
> +	if (xdp_rxq->reg_state == REG_STATE_UNUSED) {
> +		WARN(1, "Driver promised not to register this");
> +		return -EINVAL;
> +	}
> +
> +	if (xdp_rxq->reg_state == REG_STATE_REGISTERED) {
> +		WARN(1, "Missing unregister, handled but fix driver");
> +		xdp_rxq_info_unreg(xdp_rxq);
> +	}
> +
> +	if (!dev) {

Seems a bit paranoid; a driver passing a NULL dev would be badly
broken. And probably not important, but the above tests could be
marked unlikely().

> +		WARN(1, "Missing net_device from driver");
> +		return -ENODEV;
> +	}
> +
> +	/* State either UNREGISTERED or NEW */
> +	xdp_rxq_info_init(xdp_rxq);
> +	xdp_rxq->dev = dev;
> +	xdp_rxq->queue_index = queue_index;
> +
> +	xdp_rxq->reg_state = REG_STATE_REGISTERED;
> +	return 0;
> +}
> +EXPORT_SYMBOL_GPL(xdp_rxq_info_reg);
> +
> +void xdp_rxq_info_unused(struct xdp_rxq_info *xdp_rxq)
> +{
> +	xdp_rxq->reg_state = REG_STATE_UNUSED;
> +}
> +EXPORT_SYMBOL_GPL(xdp_rxq_info_unused);
>
