lists.openwall.net - Open Source and information security mailing list archives
Message-ID: <CAPhsuW4k8rVvcwkLUf1h-GXH2BVeCtxYJ2qjsuFifzanomsr6w@mail.gmail.com>
Date: Fri, 1 Mar 2019 16:25:58 -0800
From: Song Liu <liu.song.a23@...il.com>
To: brakmo <brakmo@...com>
Cc: netdev <netdev@...r.kernel.org>, Martin Lau <kafai@...com>,
	Alexei Starovoitov <ast@...com>, Daniel Borkmann <daniel@...earbox.net>,
	Kernel Team <Kernel-team@...com>
Subject: Re: [PATCH v3 bpf-next 1/5] bpf: add bpf helper bpf_skb_ecn_set_ce

On Fri, Mar 1, 2019 at 12:39 PM brakmo <brakmo@...com> wrote:
>
> This patch adds a new bpf helper BPF_FUNC_skb_ecn_set_ce
> "int bpf_skb_ecn_set_ce(struct sk_buff *skb)". It is added to
> BPF_PROG_TYPE_CGROUP_SKB typed bpf_prog which currently can
> be attached to the ingress and egress path. The helper is needed
> because this type of bpf_prog cannot modify the skb directly.
>
> This helper is used to set the ECN field of ECN-capable IP packets to ce
> (congestion encountered) in the IPv6 or IPv4 header of the skb. It can be
> used by a bpf_prog to manage egress or ingress network bandwidth limit
> per cgroupv2 by inducing an ECN response in the TCP sender.
> This works best when using DCTCP.
>
> Signed-off-by: Lawrence Brakmo <brakmo@...com>
> Signed-off-by: Martin KaFai Lau <kafai@...com>

Acked-by: Song Liu <songliubraving@...com>

> ---
>  include/uapi/linux/bpf.h | 10 +++++++++-
>  net/core/filter.c        | 28 ++++++++++++++++++++++++++++
>  2 files changed, 37 insertions(+), 1 deletion(-)
>
> diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
> index 2e308e90ffea..3c38ac9a92a7 100644
> --- a/include/uapi/linux/bpf.h
> +++ b/include/uapi/linux/bpf.h
> @@ -2359,6 +2359,13 @@ union bpf_attr {
>   *	Return
>   *		A **struct bpf_tcp_sock** pointer on success, or NULL in
>   *		case of failure.
> + *
> + * int bpf_skb_ecn_set_ce(struct sk_buff *skb)
> + *	Description
> + *		Sets ECN of IP header to ce (congestion encountered) if
> + *		current value is ect (ECN capable). Works with IPv6 and IPv4.
> + *	Return
> + *		1 if set, 0 if not set.
>   */
>  #define __BPF_FUNC_MAPPER(FN)		\
>  	FN(unspec),			\
> @@ -2457,7 +2464,8 @@ union bpf_attr {
>  	FN(spin_lock),			\
>  	FN(spin_unlock),		\
>  	FN(sk_fullsock),		\
> -	FN(tcp_sock),
> +	FN(tcp_sock),			\
> +	FN(skb_ecn_set_ce),
>
>  /* integer value in 'imm' field of BPF_CALL instruction selects which helper
>   * function eBPF program intends to call
> diff --git a/net/core/filter.c b/net/core/filter.c
> index 85749f6ec789..558ca72f2254 100644
> --- a/net/core/filter.c
> +++ b/net/core/filter.c
> @@ -5426,6 +5426,32 @@ static const struct bpf_func_proto bpf_tcp_sock_proto = {
>  	.arg1_type	= ARG_PTR_TO_SOCK_COMMON,
>  };
>
> +BPF_CALL_1(bpf_skb_ecn_set_ce, struct sk_buff *, skb)
> +{
> +	unsigned int iphdr_len;
> +
> +	if (skb->protocol == cpu_to_be16(ETH_P_IP))
> +		iphdr_len = sizeof(struct iphdr);
> +	else if (skb->protocol == cpu_to_be16(ETH_P_IPV6))
> +		iphdr_len = sizeof(struct ipv6hdr);
> +	else
> +		return 0;
> +
> +	if (skb_headlen(skb) < iphdr_len)
> +		return 0;
> +
> +	if (skb_cloned(skb) && !skb_clone_writable(skb, iphdr_len))
> +		return 0;
> +
> +	return INET_ECN_set_ce(skb);
> +}
> +
> +static const struct bpf_func_proto bpf_skb_ecn_set_ce_proto = {
> +	.func		= bpf_skb_ecn_set_ce,
> +	.gpl_only	= false,
> +	.ret_type	= RET_INTEGER,
> +	.arg1_type	= ARG_PTR_TO_CTX,
> +};
>  #endif /* CONFIG_INET */
>
>  bool bpf_helper_changes_pkt_data(void *func)
> @@ -5585,6 +5611,8 @@ cg_skb_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
>  #ifdef CONFIG_INET
>  	case BPF_FUNC_tcp_sock:
>  		return &bpf_tcp_sock_proto;
> +	case BPF_FUNC_skb_ecn_set_ce:
> +		return &bpf_skb_ecn_set_ce_proto;
>  #endif
>  	default:
>  		return sk_filter_func_proto(func_id, prog);
> --
> 2.17.1
>