Message-ID: <20231215023707.41864-1-kuniyu@amazon.com>
Date: Fri, 15 Dec 2023 11:37:07 +0900
From: Kuniyuki Iwashima <kuniyu@...zon.com>
To: Matthieu Baerts <matttbe@...nel.org>, Mat Martineau <martineau@...nel.org>,
	Paolo Abeni <pabeni@...hat.com>
CC: <edumazet@...gle.com>, <andrii@...nel.org>, <ast@...nel.org>,
	<bpf@...r.kernel.org>, <daniel@...earbox.net>, <kuni1840@...il.com>,
	<kuniyu@...zon.com>, <martin.lau@...ux.dev>, <netdev@...r.kernel.org>
Subject: Re: [PATCH v6 bpf-next 3/6] bpf: tcp: Handle BPF SYN Cookie in skb_steal_sock().

From: Eric Dumazet <edumazet@...gle.com>
Date: Thu, 14 Dec 2023 17:31:15 +0100
> On Thu, Dec 14, 2023 at 4:56 PM Kuniyuki Iwashima <kuniyu@...zon.com> wrote:
> >
> > We will support arbitrary SYN Cookie with BPF.
> >
> > If BPF prog validates ACK and kfunc allocates a reqsk, it will
> > be carried to TCP stack as skb->sk with req->syncookie 1. Also,
> > the reqsk has its listener as req->rsk_listener with no refcnt
> > taken.
> >
> > When the TCP stack looks up a socket from the skb, we steal
> > inet_reqsk(skb->sk)->rsk_listener in skb_steal_sock() so that
> > the skb will be processed in cookie_v[46]_check() with the
> > listener.
> >
> > Note that we do not clear skb->sk and skb->destructor so that we
> > can carry the reqsk to cookie_v[46]_check().
> >
> > Signed-off-by: Kuniyuki Iwashima <kuniyu@...zon.com>
> > ---
> >  include/net/request_sock.h | 15 +++++++++++++--
> >  1 file changed, 13 insertions(+), 2 deletions(-)
> >
> > diff --git a/include/net/request_sock.h b/include/net/request_sock.h
> > index 26c630c40abb..8839133d6f6b 100644
> > --- a/include/net/request_sock.h
> > +++ b/include/net/request_sock.h
> > @@ -101,10 +101,21 @@ static inline struct sock *skb_steal_sock(struct sk_buff *skb,
> >  	}
> >
> >  	*prefetched = skb_sk_is_prefetched(skb);
> > -	if (*prefetched)
> > +	if (*prefetched) {
> > +#if IS_ENABLED(CONFIG_SYN_COOKIES)
> > +		if (sk->sk_state == TCP_NEW_SYN_RECV && inet_reqsk(sk)->syncookie) {
> > +			struct request_sock *req = inet_reqsk(sk);
> > +
> > +			*refcounted = false;
> > +			sk = req->rsk_listener;
> > +			req->rsk_listener = NULL;
>
> I am not sure about interactions with MPTCP.
>
> It would be nice to have their feedback.

Matthieu, Mat, Paolo, could you double check if the change above is sane?

https://lore.kernel.org/bpf/20231214155424.67136-4-kuniyu@amazon.com/

Short summary: With this series, tc could allocate reqsk to skb->sk and
set a listener to reqsk->rsk_listener, then __inet_lookup_skb() returns
a listener in the same reuseport group, and skb is processed in the
listener's code path, especially cookie_v[46]_check().

The only difference here is that skb->sk has reqsk, which does not have
rsk_listener.

> > +			return sk;
> > +		}
> > +#endif
> >  		*refcounted = sk_is_refcounted(sk);
> > -	else
> > +	} else {
> >  		*refcounted = true;
> > +	}
> >
> >  	skb->destructor = NULL;
> >  	skb->sk = NULL;
> > --
> > 2.30.2
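For reference, the control flow the patch adds can be modeled in plain
userspace C. This is only a sketch of the post-patch skb_steal_sock()
logic: every struct here is a stub with just the fields the branch
touches, prefetch/refcount helpers are replaced by booleans, and none of
it is the real kernel code:

```c
#include <stdbool.h>
#include <stddef.h>

/* Stubbed-down models of the kernel structs; only the fields the
 * patched branch reads or writes are present. */
enum { TCP_LISTEN, TCP_NEW_SYN_RECV };

struct sock {
	int sk_state;
};

struct request_sock {
	struct sock sk;            /* a reqsk is looked up as a sock */
	bool syncookie;            /* set by the BPF kfunc that made it */
	struct sock *rsk_listener; /* listener, no refcount held */
};

struct sk_buff {
	struct sock *sk;
	bool prefetched;           /* stands in for skb_sk_is_prefetched() */
};

static struct request_sock *inet_reqsk(struct sock *sk)
{
	return (struct request_sock *)sk;
}

/* Model of the patched branch: when the prefetched sock is a
 * BPF-allocated SYN-cookie reqsk, steal rsk_listener, clear the
 * back-pointer, and report no refcount taken.  The early return keeps
 * skb->sk intact so the reqsk still reaches cookie_v[46]_check(). */
static struct sock *skb_steal_sock(struct sk_buff *skb, bool *refcounted,
				   bool *prefetched)
{
	struct sock *sk = skb->sk;

	*prefetched = skb->prefetched;
	if (*prefetched) {
		if (sk->sk_state == TCP_NEW_SYN_RECV &&
		    inet_reqsk(sk)->syncookie) {
			struct request_sock *req = inet_reqsk(sk);

			*refcounted = false;
			sk = req->rsk_listener;
			req->rsk_listener = NULL;
			return sk;         /* skb->sk still holds the reqsk */
		}
		*refcounted = true;        /* stands in for sk_is_refcounted() */
	} else {
		*refcounted = true;
	}

	skb->sk = NULL;                    /* non-cookie path detaches the sock */
	return sk;
}
```

The point the model makes explicit is the asymmetry Eric is asking
about: only the SYN-cookie reqsk path returns with skb->sk still set,
everything else detaches the sock as before.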