Message-ID: <87iljoz83d.fsf@toke.dk>
Date: Wed, 09 Nov 2022 12:21:42 +0100
From: Toke Høiland-Jørgensen <toke@...hat.com>
To: Stanislav Fomichev <sdf@...gle.com>, bpf@...r.kernel.org
Cc: ast@...nel.org, daniel@...earbox.net, andrii@...nel.org,
martin.lau@...ux.dev, song@...nel.org, yhs@...com,
john.fastabend@...il.com, kpsingh@...nel.org, sdf@...gle.com,
haoluo@...gle.com, jolsa@...nel.org,
David Ahern <dsahern@...il.com>,
Jakub Kicinski <kuba@...nel.org>,
Willem de Bruijn <willemb@...gle.com>,
Jesper Dangaard Brouer <brouer@...hat.com>,
Anatoly Burakov <anatoly.burakov@...el.com>,
Alexander Lobakin <alexandr.lobakin@...el.com>,
Magnus Karlsson <magnus.karlsson@...il.com>,
Maryam Tahhan <mtahhan@...hat.com>, xdp-hints@...-project.net,
netdev@...r.kernel.org
Subject: Re: [xdp-hints] [RFC bpf-next v2 04/14] veth: Support rx timestamp
metadata for xdp

Stanislav Fomichev <sdf@...gle.com> writes:
> xskxceiver conveniently sets up veth pairs so it seems logical
> to use veth as an example for some of the metadata handling.
>
> We timestamp the skb right when we "receive" it, store its
> pointer in a new veth_xdp_buff wrapper, and generate BPF bytecode to
> reach it from the BPF program.
>
> This largely follows the idea of "store some queue context in
> the xdp_buff/xdp_frame so the metadata can be reached
> from the BPF program".
>
> Cc: John Fastabend <john.fastabend@...il.com>
> Cc: David Ahern <dsahern@...il.com>
> Cc: Martin KaFai Lau <martin.lau@...ux.dev>
> Cc: Jakub Kicinski <kuba@...nel.org>
> Cc: Willem de Bruijn <willemb@...gle.com>
> Cc: Jesper Dangaard Brouer <brouer@...hat.com>
> Cc: Anatoly Burakov <anatoly.burakov@...el.com>
> Cc: Alexander Lobakin <alexandr.lobakin@...el.com>
> Cc: Magnus Karlsson <magnus.karlsson@...il.com>
> Cc: Maryam Tahhan <mtahhan@...hat.com>
> Cc: xdp-hints@...-project.net
> Cc: netdev@...r.kernel.org
> Signed-off-by: Stanislav Fomichev <sdf@...gle.com>
> ---
> drivers/net/veth.c | 31 +++++++++++++++++++++++++++++++
> 1 file changed, 31 insertions(+)
>
> diff --git a/drivers/net/veth.c b/drivers/net/veth.c
> index 917ba57453c1..0e629ceb087b 100644
> --- a/drivers/net/veth.c
> +++ b/drivers/net/veth.c
> @@ -25,6 +25,7 @@
> #include <linux/filter.h>
> #include <linux/ptr_ring.h>
> #include <linux/bpf_trace.h>
> +#include <linux/bpf_patch.h>
> #include <linux/net_tstamp.h>
>
> #define DRV_NAME "veth"
> @@ -118,6 +119,7 @@ static struct {
>
> struct veth_xdp_buff {
> struct xdp_buff xdp;
> + struct sk_buff *skb;
> };
>
> static int veth_get_link_ksettings(struct net_device *dev,
> @@ -602,6 +604,7 @@ static struct xdp_frame *veth_xdp_rcv_one(struct veth_rq *rq,
>
> xdp_convert_frame_to_buff(frame, xdp);
> xdp->rxq = &rq->xdp_rxq;
> + vxbuf.skb = NULL;
>
> act = bpf_prog_run_xdp(xdp_prog, xdp);
>
> @@ -826,6 +829,7 @@ static struct sk_buff *veth_xdp_rcv_skb(struct veth_rq *rq,
>
> orig_data = xdp->data;
> orig_data_end = xdp->data_end;
> + vxbuf.skb = skb;
>
> act = bpf_prog_run_xdp(xdp_prog, xdp);
>
> @@ -942,6 +946,7 @@ static int veth_xdp_rcv(struct veth_rq *rq, int budget,
> struct sk_buff *skb = ptr;
>
> stats->xdp_bytes += skb->len;
> + __net_timestamp(skb);
> skb = veth_xdp_rcv_skb(rq, skb, bq, stats);
> if (skb) {
> if (skb_shared(skb) || skb_unclone(skb, GFP_ATOMIC))
> @@ -1665,6 +1670,31 @@ static int veth_xdp(struct net_device *dev, struct netdev_bpf *xdp)
> }
> }
>
> +static void veth_unroll_kfunc(const struct bpf_prog *prog, u32 func_id,
> + struct bpf_patch *patch)
> +{
> + if (func_id == xdp_metadata_kfunc_id(XDP_METADATA_KFUNC_RX_TIMESTAMP_SUPPORTED)) {
> + /* return true; */
> + bpf_patch_append(patch, BPF_MOV64_IMM(BPF_REG_0, 1));
> + } else if (func_id == xdp_metadata_kfunc_id(XDP_METADATA_KFUNC_RX_TIMESTAMP)) {
> + bpf_patch_append(patch,
> + /* r5 = ((struct veth_xdp_buff *)r1)->skb; */
> + BPF_LDX_MEM(BPF_DW, BPF_REG_5, BPF_REG_1,
> + offsetof(struct veth_xdp_buff, skb)),
> + /* if (r5 == NULL) { */
> + BPF_JMP_IMM(BPF_JNE, BPF_REG_5, 0, 2),
> + /* return 0; */
> + BPF_MOV64_IMM(BPF_REG_0, 0),
> + BPF_JMP_A(1),
> + /* } else { */
> + /* return ((struct sk_buff *)r5)->tstamp; */
> + BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_5,
> + offsetof(struct sk_buff, tstamp)),
> + /* } */

I don't think it's realistic to expect driver developers to write this
level of BPF instructions for everything. With the 'patch' thing it
should be feasible to write some helpers that driver developers can use,
right? E.g., this one could be:

  bpf_read_context_member_u64(size_t ctx_offset, size_t member_offset)

called as:

  bpf_read_context_member_u64(offsetof(struct veth_xdp_buff, skb),
                              offsetof(struct sk_buff, tstamp));

or with some macro trickery we could even hide the offsetof so you just
pass in types and member names?
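
Something like this, perhaps (completely untested sketch; I'm assuming
the helper also takes the bpf_patch to append to, and I'm reusing the
bpf_patch_append()/BPF_* instruction macros from this series):

  /* Hypothetical helper: emit the same NULL-checked two-level load as
   * the open-coded veth version above, i.e. load a pointer member from
   * the driver ctx, return 0 if it is NULL, otherwise return the u64
   * member at member_offset inside the pointed-to struct.
   */
  static void bpf_read_context_member_u64(struct bpf_patch *patch,
                                          size_t ctx_offset,
                                          size_t member_offset)
  {
          bpf_patch_append(patch,
                  /* r5 = *(void **)(r1 + ctx_offset); */
                  BPF_LDX_MEM(BPF_DW, BPF_REG_5, BPF_REG_1, ctx_offset),
                  /* if (r5 == NULL) return 0; */
                  BPF_JMP_IMM(BPF_JNE, BPF_REG_5, 0, 2),
                  BPF_MOV64_IMM(BPF_REG_0, 0),
                  BPF_JMP_A(1),
                  /* return *(u64 *)(r5 + member_offset); */
                  BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_5,
                              member_offset));
  }

  /* And the macro version that hides the offsetof: */
  #define bpf_read_context_member(patch, ctx_type, ctx_member,    \
                                  member_type, member)            \
          bpf_read_context_member_u64(patch,                      \
                  offsetof(ctx_type, ctx_member),                 \
                  offsetof(member_type, member))

which would collapse the whole RX_TIMESTAMP branch in veth_unroll_kfunc()
into a single line:

  bpf_read_context_member(patch, struct veth_xdp_buff, skb,
                          struct sk_buff, tstamp);
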
-Toke