Message-ID: <602ac96f9e30f_3ed41208b6@john-XPS-13-9370.notmuch>
Date: Mon, 15 Feb 2021 11:20:15 -0800
From: John Fastabend <john.fastabend@...il.com>
To: Cong Wang <xiyou.wangcong@...il.com>, netdev@...r.kernel.org
Cc: bpf@...r.kernel.org, duanxiongchun@...edance.com,
wangdongdong.6@...edance.com, jiang.wang@...edance.com,
Cong Wang <cong.wang@...edance.com>,
John Fastabend <john.fastabend@...il.com>,
Daniel Borkmann <daniel@...earbox.net>,
Jakub Sitnicki <jakub@...udflare.com>,
Lorenz Bauer <lmb@...udflare.com>
Subject: RE: [Patch bpf-next v3 4/5] skmsg: use skb ext instead of TCP_SKB_CB
Cong Wang wrote:
> From: Cong Wang <cong.wang@...edance.com>
>
> Currently TCP_SKB_CB() is hard-coded in the skmsg code, so it does not
> work for non-TCP protocols. We can move this state to an skb ext
> instead of playing with skb cb, which is harder to get right.
>
> Cc: John Fastabend <john.fastabend@...il.com>
> Cc: Daniel Borkmann <daniel@...earbox.net>
> Cc: Jakub Sitnicki <jakub@...udflare.com>
> Reviewed-by: Lorenz Bauer <lmb@...udflare.com>
> Signed-off-by: Cong Wang <cong.wang@...edance.com>
> ---
I'm not seeing the advantage of doing this at the moment. We can
continue to use cb[] here, which is simpler IMO, and use the ext
only if needed for the other use cases. As I understand it, this
adds a per-packet alloc cost that we don't have today.
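
For reference, the cb[]-based state this series replaces lives in the
tcp_skb_cb bpf member and is accessed through helpers roughly like the
sketch below (a from-memory approximation of what include/net/tcp.h
already has in this tree, not a verbatim copy):

/* Sketch of the existing TCP_SKB_CB()-based approach: the BPF verdict
 * state sits in the 48-byte skb->cb[] area every skb already carries,
 * so no extra allocation is needed on the receive path.
 */
static inline bool tcp_skb_bpf_ingress(const struct sk_buff *skb)
{
	return TCP_SKB_CB(skb)->bpf.flags & BPF_F_INGRESS;
}

static inline struct sock *tcp_skb_bpf_redirect_fetch(struct sk_buff *skb)
{
	return TCP_SKB_CB(skb)->bpf.sk_redir;
}

static inline void tcp_skb_bpf_redirect_clear(struct sk_buff *skb)
{
	TCP_SKB_CB(skb)->bpf.sk_redir = NULL;
}
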
[...]
> diff --git a/include/linux/skmsg.h b/include/linux/skmsg.h
> index e3bb712af257..d5c711ef6d4b 100644
> --- a/include/linux/skmsg.h
> +++ b/include/linux/skmsg.h
> @@ -459,4 +459,44 @@ static inline bool sk_psock_strp_enabled(struct sk_psock *psock)
> return false;
> return !!psock->saved_data_ready;
> }
> +
> +struct skb_bpf_ext {
> + __u32 flags;
> + struct sock *sk_redir;
> +};
> +
> +#if IS_ENABLED(CONFIG_NET_SOCK_MSG)
> +static inline
> +bool skb_bpf_ext_ingress(const struct sk_buff *skb)
> +{
> + struct skb_bpf_ext *ext = skb_ext_find(skb, SKB_EXT_BPF);
> +
> + return ext->flags & BPF_F_INGRESS;
> +}
> +
> +static inline
> +void skb_bpf_ext_set_ingress(const struct sk_buff *skb)
> +{
> + struct skb_bpf_ext *ext = skb_ext_find(skb, SKB_EXT_BPF);
> +
> + ext->flags |= BPF_F_INGRESS;
> +}
> +
> +static inline
> +struct sock *skb_bpf_ext_redirect_fetch(struct sk_buff *skb)
> +{
> + struct skb_bpf_ext *ext = skb_ext_find(skb, SKB_EXT_BPF);
> +
> + return ext->sk_redir;
> +}
> +
> +static inline
> +void skb_bpf_ext_redirect_clear(struct sk_buff *skb)
> +{
> + struct skb_bpf_ext *ext = skb_ext_find(skb, SKB_EXT_BPF);
> +
> + ext->flags = 0;
> + ext->sk_redir = NULL;
> +}
> +#endif /* CONFIG_NET_SOCK_MSG */
So we will have some slight duplication between the cb[] variant and the
ext variant above. I'm OK with that to avoid an allocation.
[...]
> @@ -1003,11 +1008,17 @@ static int sk_psock_verdict_recv(read_descriptor_t *desc, struct sk_buff *skb,
> goto out;
> }
> skb_set_owner_r(skb, sk);
> + if (!skb_ext_add(skb, SKB_EXT_BPF)) {
> + len = 0;
> + kfree_skb(skb);
> + goto out;
> + }
> +
Per-packet cost here. Perhaps you can argue a small alloc will usually not be
noticeable in such a large stack, but once we convert over it will be very
hard to go back. And I'm looking at optimizing this path now.
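
To make the cost concrete: an skb normally arrives with no extension block,
so the skb_ext_add() above has to allocate one before the SKB_EXT_BPF state
can be stored. Conceptually it behaves like the sketch below (illustrative
only; the real skb_ext_add() lives in net/core/skbuff.c, and skb_ext_slot()
here is a made-up helper for the example):

/* Illustrative sketch of where the per-packet cost comes from; not the
 * actual net/core/skbuff.c implementation.
 */
static void *skb_ext_add_sketch(struct sk_buff *skb, enum skb_ext_id id)
{
	struct skb_ext *ext = skb->extensions;

	if (!ext) {
		/* first extension on this skb: a GFP_ATOMIC slab allocation
		 * on the receive path, per packet on this code path
		 */
		ext = kmem_cache_alloc(skbuff_ext_cache, GFP_ATOMIC);
		if (!ext)
			return NULL;	/* the failure handled with kfree_skb() above */
		skb->extensions = ext;
	}
	/* made-up accessor returning the per-id slot inside ext */
	return skb_ext_slot(ext, id);
}
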
> prog = READ_ONCE(psock->progs.skb_verdict);
> if (likely(prog)) {
> - tcp_skb_bpf_redirect_clear(skb);
> + skb_bpf_ext_redirect_clear(skb);
> ret = sk_psock_bpf_run(psock, prog, skb);
> - ret = sk_psock_map_verd(ret, tcp_skb_bpf_redirect_fetch(skb));
> + ret = sk_psock_map_verd(ret, skb_bpf_ext_redirect_fetch(skb));
> }
> sk_psock_verdict_apply(psock, skb, ret);
Thanks for the series, Cong. Drop this patch and resubmit, carrying ACKs
forward, and then let's revisit this later.
Thanks,
John