Message-ID: <87v8e8xsih.fsf@cloudflare.com>
Date: Tue, 25 Jul 2023 11:08:15 +0200
From: Jakub Sitnicki <jakub@...udflare.com>
To: Yan Zhai <yan@...udflare.com>
Cc: bpf@...r.kernel.org, Alexei Starovoitov <ast@...nel.org>,
Daniel Borkmann <daniel@...earbox.net>,
Andrii Nakryiko <andrii@...nel.org>,
Martin KaFai Lau <martin.lau@...ux.dev>,
Song Liu <song@...nel.org>, Yonghong Song <yhs@...com>,
John Fastabend <john.fastabend@...il.com>,
KP Singh <kpsingh@...nel.org>,
Stanislav Fomichev <sdf@...gle.com>,
Hao Luo <haoluo@...gle.com>, Jiri Olsa <jolsa@...nel.org>,
"David S. Miller" <davem@...emloft.net>,
Eric Dumazet <edumazet@...gle.com>,
Jakub Kicinski <kuba@...nel.org>,
Paolo Abeni <pabeni@...hat.com>,
Mykola Lysenko <mykolal@...com>, Shuah Khan <shuah@...nel.org>,
linux-kernel@...r.kernel.org, netdev@...r.kernel.org,
linux-kselftest@...r.kernel.org,
Jordan Griege <jgriege@...udflare.com>,
kernel-team@...udflare.com
Subject: Re: [PATCH v3 bpf 1/2] bpf: fix skb_do_redirect return values

On Mon, Jul 24, 2023 at 09:13 PM -07, Yan Zhai wrote:
> skb_do_redirect returns a variety of values: error codes (negative), 0
> (success), and some positive status codes, e.g. NET_XMIT_CN and
> NET_RX_DROP. Such codes are not handled at the lwt xmit hook in
> ip_finish_output2 and ip6_finish_output, which can cause unexpected
> problems. This change converts the positive status codes to proper
> error codes.
>
> Suggested-by: Stanislav Fomichev <sdf@...gle.com>
> Reported-by: Jordan Griege <jgriege@...udflare.com>
> Signed-off-by: Yan Zhai <yan@...udflare.com>
>
> ---
> v3: converts also RX side return value in addition to TX values
> v2: code style change suggested by Stanislav Fomichev
> ---
> net/core/filter.c | 12 +++++++++++-
> 1 file changed, 11 insertions(+), 1 deletion(-)
>
> diff --git a/net/core/filter.c b/net/core/filter.c
> index 06ba0e56e369..3e232ce11ca0 100644
> --- a/net/core/filter.c
> +++ b/net/core/filter.c
> @@ -2095,7 +2095,12 @@ static const struct bpf_func_proto bpf_csum_level_proto = {
>  
>  static inline int __bpf_rx_skb(struct net_device *dev, struct sk_buff *skb)
>  {
> -	return dev_forward_skb_nomtu(dev, skb);
> +	int ret = dev_forward_skb_nomtu(dev, skb);
> +
> +	if (unlikely(ret > 0))
> +		return -ENETDOWN;
> +
> +	return 0;
>  }
>  
>  static inline int __bpf_rx_skb_no_mac(struct net_device *dev,
> @@ -2106,6 +2111,8 @@ static inline int __bpf_rx_skb_no_mac(struct net_device *dev,
>  	if (likely(!ret)) {
>  		skb->dev = dev;
>  		ret = netif_rx(skb);
> +	} else if (ret > 0) {
> +		return -ENETDOWN;
>  	}
>  
>  	return ret;
> @@ -2129,6 +2136,9 @@ static inline int __bpf_tx_skb(struct net_device *dev, struct sk_buff *skb)
>  	ret = dev_queue_xmit(skb);
>  	dev_xmit_recursion_dec();
>  
> +	if (unlikely(ret > 0))
> +		ret = net_xmit_errno(ret);
> +
>  	return ret;
>  }
net_xmit_errno maps NET_XMIT_DROP to -ENOBUFS. It would make sense to me
to map NET_RX_DROP to -ENOBUFS as well, instead of -ENETDOWN, to be
consistent.
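
Something along these lines is what I have in mind, as an untested sketch,
and assuming dev_forward_skb_nomtu() only ever returns NET_RX_SUCCESS or
NET_RX_DROP here:

static inline int __bpf_rx_skb(struct net_device *dev, struct sk_buff *skb)
{
	int ret = dev_forward_skb_nomtu(dev, skb);

	/* Map NET_RX_DROP to -ENOBUFS, mirroring what net_xmit_errno()
	 * does for NET_XMIT_DROP on the TX path.
	 */
	if (unlikely(ret > 0))
		return -ENOBUFS;

	return 0;
}

Same goes for the new branch in __bpf_rx_skb_no_mac().
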
It looks like the Fixes tag for this should point to the change that
introduced BPF for LWT:

Fixes: 3a0af8fd61f9 ("bpf: BPF for lightweight tunnel infrastructure")