Message-ID: <20210521051548.48515-1-kuniyu@amazon.co.jp>
Date:   Fri, 21 May 2021 14:15:48 +0900
From:   Kuniyuki Iwashima <kuniyu@...zon.co.jp>
To:     <kafai@...com>
CC:     <andrii@...nel.org>, <ast@...nel.org>, <benh@...zon.com>,
        <bpf@...r.kernel.org>, <daniel@...earbox.net>,
        <davem@...emloft.net>, <edumazet@...gle.com>, <kuba@...nel.org>,
        <kuni1840@...il.com>, <kuniyu@...zon.co.jp>,
        <linux-kernel@...r.kernel.org>, <netdev@...r.kernel.org>
Subject: Re: [PATCH v6 bpf-next 03/11] tcp: Keep TCP_CLOSE sockets in the reuseport group.

From:   Martin KaFai Lau <kafai@...com>
Date:   Thu, 20 May 2021 21:47:25 -0700
> On Fri, May 21, 2021 at 09:26:39AM +0900, Kuniyuki Iwashima wrote:
> > From:   Martin KaFai Lau <kafai@...com>
> > Date:   Thu, 20 May 2021 16:39:06 -0700
> > > On Fri, May 21, 2021 at 07:54:48AM +0900, Kuniyuki Iwashima wrote:
> > > > From:   Martin KaFai Lau <kafai@...com>
> > > > Date:   Thu, 20 May 2021 14:22:01 -0700
> > > > > On Thu, May 20, 2021 at 05:51:17PM +0900, Kuniyuki Iwashima wrote:
> > > > > > From:   Martin KaFai Lau <kafai@...com>
> > > > > > Date:   Wed, 19 May 2021 23:26:48 -0700
> > > > > > > On Mon, May 17, 2021 at 09:22:50AM +0900, Kuniyuki Iwashima wrote:
> > > > > > > 
> > > > > > > > +static int reuseport_resurrect(struct sock *sk, struct sock_reuseport *old_reuse,
> > > > > > > > +			       struct sock_reuseport *reuse, bool bind_inany)
> > > > > > > > +{
> > > > > > > > +	if (old_reuse == reuse) {
> > > > > > > > +		/* If sk was in the same reuseport group, just pop sk out of
> > > > > > > > +		 * the closed section and push sk into the listening section.
> > > > > > > > +		 */
> > > > > > > > +		__reuseport_detach_closed_sock(sk, old_reuse);
> > > > > > > > +		__reuseport_add_sock(sk, old_reuse);
> > > > > > > > +		return 0;
> > > > > > > > +	}
> > > > > > > > +
> > > > > > > > +	if (!reuse) {
> > > > > > > > +		/* In bind()/listen() path, we cannot carry over the eBPF prog
> > > > > > > > +		 * for the shutdown()ed socket. In setsockopt() path, we should
> > > > > > > > +		 * not change the eBPF prog of listening sockets by attaching a
> > > > > > > > +		 * prog to the shutdown()ed socket. Thus, we will allocate a new
> > > > > > > > +		 * reuseport group and detach sk from the old group.
> > > > > > > > +		 */
> > > > > > > For the reuseport_attach_prog() path, I think it needs to consider
> > > > > > > the reuse->num_closed_socks != 0 case also and that should belong
> > > > > > > to the resurrect case.  For example, when
> > > > > > > sk_unhashed(sk) but sk->sk_reuseport == 0.
> > > > > > 
> > > > > > In the path, reuseport_resurrect() is called from reuseport_alloc() only
> > > > > > if reuse->num_closed_socks != 0.
> > > > > > 
> > > > > > 
> > > > > > > @@ -92,6 +117,14 @@ int reuseport_alloc(struct sock *sk, bool bind_inany)
> > > > > > >  	reuse = rcu_dereference_protected(sk->sk_reuseport_cb,
> > > > > > >  					  lockdep_is_held(&reuseport_lock));
> > > > > > >  	if (reuse) {
> > > > > > > +		if (reuse->num_closed_socks) {
> > > > > > 
> > > > > > But, should this be
> > > > > > 
> > > > > > 	if (sk->sk_state == TCP_CLOSE && reuse->num_closed_socks)
> > > > > > 
> > > > > > because we need not allocate a new group when we attach a bpf prog to
> > > > > > listeners?
> > > > > The reuseport_alloc() is fine as is.  No need to change.
> > > > 
> > > > I missed that sk_unhashed(sk) prevents calling reuseport_alloc()
> > > > when sk_state == TCP_LISTEN. I'll keep it as is.
> > > > 
> > > > 
> > > > > 
> > > > > I should have copied reuseport_attach_prog() in the last reply and
> > > > > commented there instead.
> > > > > 
> > > > > I meant reuseport_attach_prog() needs a change.  In reuseport_attach_prog(),
> > > > > iiuc, currently passing the "else if (!rcu_access_pointer(sk->sk_reuseport_cb))"
> > > > > check implies the sk was (and still is) hashed with sk_reuseport enabled
> > > > > because the current behavior would have set sk_reuseport_cb to NULL during
> > > > > unhash but it is no longer true now.  For example, this will break:
> > > > > 
> > > > > 1. shutdown(lsk); /* lsk was bound with sk_reuseport enabled */
> > > > > 2. setsockopt(lsk, ..., SO_REUSEPORT, &zero, ...); /* disable sk_reuseport */
> > > > > 3. setsockopt(lsk, ..., SO_ATTACH_REUSEPORT_EBPF, &prog_fd, ...);
> > > > >    ^---- /* This will work now because sk_reuseport_cb is not NULL.
> > > > >           * However, it shouldn't be allowed.
> > > > > 	  */
> > > > 
> > > > Thank you for the explanation, I understand the case now.
> > > > 
> > > > Exactly. I've confirmed that the setsockopt() succeeded in that case and I
> > > > could change the active listeners' prog via a shutdown()ed socket.
> > > > 
> > > > 
> > > > > 
> > > > > I am thinking something like this (uncompiled code):
> > > > > 
> > > > > int reuseport_attach_prog(struct sock *sk, struct bpf_prog *prog)
> > > > > {
> > > > > 	struct sock_reuseport *reuse;
> > > > > 	struct bpf_prog *old_prog;
> > > > > 
> > > > > 	if (sk_unhashed(sk)) {
> > > > > 		int err;
> > > > > 
> > > > > 		if (!sk->sk_reuseport)
> > > > > 			return -EINVAL;
> > > > > 
> > > > > 		err = reuseport_alloc(sk, false);
> > > > > 		if (err)
> > > > > 			return err;
> > > > > 	} else if (!rcu_access_pointer(sk->sk_reuseport_cb)) {
> > > > > 		/* The socket wasn't bound with SO_REUSEPORT */
> > > > > 		return -EINVAL;
> > > > > 	}
> > > > > 
> > > > > 	/* ... */
> > > > > }
> > > > > 
> > > > > WDYT?
> > > > 
> > > > I tested this change and it worked fine. I think the same check should be
> > > > added in reuseport_detach_prog() as well.
> > > > 
> > > > ---8<---
> > > > int reuseport_detach_prog(struct sock *sk)
> > > > {
> > > >         struct sock_reuseport *reuse;
> > > >         struct bpf_prog *old_prog;
> > > > 
> > > >         if (!rcu_access_pointer(sk->sk_reuseport_cb))
> > > > 		return sk->sk_reuseport ? -ENOENT : -EINVAL;
> > > > ---8<---
> > > Right, a quick thought is something like this for detach:
> > > 
> > > 	spin_lock_bh(&reuseport_lock);
> > > 	reuse = rcu_dereference_protected(sk->sk_reuseport_cb,
> > > 					  lockdep_is_held(&reuseport_lock));
> > 
> > Is this necessary because reuseport_grow() can detach sk?
> > 
> >         if (!reuse) {
> >                 spin_unlock_bh(&reuseport_lock);
> >                 return -ENOENT;
> >         }
> Yes, it is needed.  Please add a comment for the reuseport_grow() case also.

I see, I'll add this change in the next spin.
Thank you!

---8<---
@@ -608,13 +612,24 @@ int reuseport_detach_prog(struct sock *sk)
        struct sock_reuseport *reuse;
        struct bpf_prog *old_prog;
 
-       if (!rcu_access_pointer(sk->sk_reuseport_cb))
-               return sk->sk_reuseport ? -ENOENT : -EINVAL;
-
        old_prog = NULL;
        spin_lock_bh(&reuseport_lock);
        reuse = rcu_dereference_protected(sk->sk_reuseport_cb,
                                          lockdep_is_held(&reuseport_lock));
+
+       /* reuse must be checked after acquiring the reuseport_lock
+        * because reuseport_grow() can detach a closed sk.
+        */
+       if (!reuse) {
+               spin_unlock_bh(&reuseport_lock);
+               return sk->sk_reuseport ? -ENOENT : -EINVAL;
+       }
+
+       if (sk_unhashed(sk) && reuse->num_closed_socks) {
+               spin_unlock_bh(&reuseport_lock);
+               return -ENOENT;
+       }
+
        old_prog = rcu_replace_pointer(reuse->prog, old_prog,
                                       lockdep_is_held(&reuseport_lock));
        spin_unlock_bh(&reuseport_lock);
---8<---
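
For reference, a rough userspace sketch of the sequence being discussed (not
part of the patch; error handling is omitted, and SO_DETACH_REUSEPORT_BPF is
defined locally in case the libc headers lack it). With the check above, the
detach via the shutdown()ed socket should be rejected:

---8<---
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

#ifndef SO_DETACH_REUSEPORT_BPF
#define SO_DETACH_REUSEPORT_BPF	68
#endif

int main(void)
{
	struct sockaddr_in addr = {
		.sin_family = AF_INET,
		.sin_addr.s_addr = htonl(INADDR_ANY),
	};
	int one = 1, zero = 0;
	int lsk;

	lsk = socket(AF_INET, SOCK_STREAM, 0);

	/* bind with SO_REUSEPORT so lsk joins a reuseport group */
	setsockopt(lsk, SOL_SOCKET, SO_REUSEPORT, &one, sizeof(one));
	bind(lsk, (struct sockaddr *)&addr, sizeof(addr));
	listen(lsk, 8);

	/* lsk moves to TCP_CLOSE and stays in the closed section */
	shutdown(lsk, SHUT_RDWR);
	setsockopt(lsk, SOL_SOCKET, SO_REUSEPORT, &zero, sizeof(zero));

	/* with the check above, this should fail (e.g. -ENOENT)
	 * instead of touching the listeners' prog
	 */
	if (setsockopt(lsk, SOL_SOCKET, SO_DETACH_REUSEPORT_BPF,
		       &zero, sizeof(zero)))
		printf("detach failed as expected: %s\n", strerror(errno));

	close(lsk);
	return 0;
}
---8<---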


> 
> > 
> > Then we can remove the rcu_access_pointer() check and move the sk_reuseport
> > check here.
> Makes sense.
