Message-ID: <20201210184915.4codwsoufxdhtj3o@kafai-mbp.dhcp.thefacebook.com>
Date: Thu, 10 Dec 2020 10:49:15 -0800
From: Martin KaFai Lau <kafai@...com>
To: Kuniyuki Iwashima <kuniyu@...zon.co.jp>
CC: <ast@...nel.org>, <benh@...zon.com>, <bpf@...r.kernel.org>,
<daniel@...earbox.net>, <davem@...emloft.net>,
<edumazet@...gle.com>, <kuba@...nel.org>, <kuni1840@...il.com>,
<linux-kernel@...r.kernel.org>, <netdev@...r.kernel.org>
Subject: Re: [PATCH v1 bpf-next 05/11] tcp: Migrate TCP_NEW_SYN_RECV requests.
On Thu, Dec 10, 2020 at 02:15:38PM +0900, Kuniyuki Iwashima wrote:
> From: Martin KaFai Lau <kafai@...com>
> Date: Wed, 9 Dec 2020 16:07:07 -0800
> > On Tue, Dec 01, 2020 at 11:44:12PM +0900, Kuniyuki Iwashima wrote:
> > > This patch renames reuseport_select_sock() to __reuseport_select_sock() and
> > > adds two wrappers around it to pass the migration type defined in the
> > > previous commit.
> > >
> > > reuseport_select_sock : BPF_SK_REUSEPORT_MIGRATE_NO
> > > reuseport_select_migrated_sock : BPF_SK_REUSEPORT_MIGRATE_REQUEST
> > >
> > > As mentioned before, we have to select a new listener for TCP_NEW_SYN_RECV
> > > requests when receiving the final ACK or sending a SYN+ACK. Therefore, this
> > > patch also changes the code to call reuseport_select_migrated_sock() even
> > > if the listening socket is TCP_CLOSE. If we can pick out a listening socket
> > > from the reuseport group, we rewrite request_sock.rsk_listener and resume
> > > processing the request.
> > >
> > > Reviewed-by: Benjamin Herrenschmidt <benh@...zon.com>
> > > Signed-off-by: Kuniyuki Iwashima <kuniyu@...zon.co.jp>
> > > ---
> > >  include/net/inet_connection_sock.h | 12 +++++++++++
> > >  include/net/request_sock.h         | 13 ++++++++++++
> > >  include/net/sock_reuseport.h       |  8 +++----
> > >  net/core/sock_reuseport.c          | 34 ++++++++++++++++++++++++------
> > >  net/ipv4/inet_connection_sock.c    | 13 ++++++++++--
> > >  net/ipv4/tcp_ipv4.c                |  9 ++++++--
> > >  net/ipv6/tcp_ipv6.c                |  9 ++++++--
> > >  7 files changed, 81 insertions(+), 17 deletions(-)
> > >
> > > diff --git a/include/net/inet_connection_sock.h b/include/net/inet_connection_sock.h
> > > index 2ea2d743f8fc..1e0958f5eb21 100644
> > > --- a/include/net/inet_connection_sock.h
> > > +++ b/include/net/inet_connection_sock.h
> > > @@ -272,6 +272,18 @@ static inline void inet_csk_reqsk_queue_added(struct sock *sk)
> > >  	reqsk_queue_added(&inet_csk(sk)->icsk_accept_queue);
> > >  }
> > >
> > > +static inline void inet_csk_reqsk_queue_migrated(struct sock *sk,
> > > +						 struct sock *nsk,
> > > +						 struct request_sock *req)
> > > +{
> > > +	reqsk_queue_migrated(&inet_csk(sk)->icsk_accept_queue,
> > > +			     &inet_csk(nsk)->icsk_accept_queue,
> > > +			     req);
> > > +	sock_put(sk);
> > I am not sure it is safe to do sock_put(sk) here.
> > IIUC, when req->rsk_refcnt is held, the req also holds a refcnt
> > to req->rsk_listener, such that sock_hold(req->rsk_listener) is
> > safe because its sk_refcnt is not zero.
>
> I think it is safe to call sock_put() for the old listener here.
>
> Without this patchset, when receiving the final ACK or retransmitting the
> SYN+ACK, if sk_state == TCP_CLOSE, sock_put(req->rsk_listener) is done
> by calling reqsk_put() twice in inet_csk_reqsk_queue_drop_and_put().
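>
> That drop path is roughly the following (paraphrased, not verbatim):
>
> ---8<---
> void inet_csk_reqsk_queue_drop_and_put(struct sock *sk,
> 				       struct request_sock *req)
> {
> 	inet_csk_reqsk_queue_drop(sk, req);	/* reqsk_put() #1 if the req
> 						 * was still linked in ehash
> 						 */
> 	reqsk_put(req);				/* reqsk_put() #2: if this is the
> 						 * last ref, reqsk_free() does
> 						 * sock_put(req->rsk_listener)
> 						 */
> }
> ---8<---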
Note that in your example (final ACK), sock_put(req->rsk_listener) is
_only_ called when reqsk_put() gets refcount_dec_and_test(&req->rsk_refcnt)
to reach zero.
Here in this patch, it calls sock_put(req->rsk_listener) without
req->rsk_refcnt reaching zero.
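For reference, sock_put(req->rsk_listener) normally only happens at the
end of this chain (paraphrased sketch, not the verbatim source):

---8<---
static inline void reqsk_put(struct request_sock *req)
{
	if (refcount_dec_and_test(&req->rsk_refcnt))
		reqsk_free(req);	/* -> __reqsk_free() ->
					 * sock_put(req->rsk_listener)
					 */
}
---8<---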
Let's say there are two cores, each holding a refcnt to the req (one cnt
per core) after looking the req up from ehash. One core does this migration
and sock_put(req->rsk_listener); the other core does
sock_hold(req->rsk_listener):

    Core1                              Core2
    sock_put(req->rsk_listener)
                                       sock_hold(req->rsk_listener)

If that sock_put() drops the old listener's last reference, Core2 then
does sock_hold() on a freed sock.
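Taking a ref is only safe when the refcnt is known to be non-zero. When
it may concurrently hit zero, the hold side needs something like this
(generic sketch, not a concrete suggestion for this exact spot):

---8<---
/* Fails instead of resurrecting a sock whose refcnt hit zero.
 * Even this is only safe if something (e.g. RCU) keeps the
 * sock's memory around long enough to attempt the increment.
 */
if (unlikely(!refcount_inc_not_zero(&sk->sk_refcnt)))
	return NULL;	/* listener is gone; caller must handle it */
---8<---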
> And then, we do `goto lookup;` and overwrite the sk.
>
> In the v2 patchset, refcount_inc_not_zero() is done for the new listener in
> reuseport_select_migrated_sock(), so we have to call sock_put() for the old
> listener instead to free it properly.
>
> ---8<---
> +struct sock *reuseport_select_migrated_sock(struct sock *sk, u32 hash,
> +					    struct sk_buff *skb)
> +{
> +	struct sock *nsk;
> +
> +	nsk = __reuseport_select_sock(sk, hash, skb, 0, BPF_SK_REUSEPORT_MIGRATE_REQUEST);
> +	if (nsk && likely(refcount_inc_not_zero(&nsk->sk_refcnt)))
There is another potential issue here. The TCP_LISTEN nsk is protected
by RCU, so refcount_inc_not_zero(&nsk->sk_refcnt) cannot be done unless
it is under rcu_read_lock().
The receive path may be OK since it already runs in an RCU read-side
section, but you may need to check the other call paths.
> +		return nsk;
> +
> +	return NULL;
> +}
> +EXPORT_SYMBOL(reuseport_select_migrated_sock);
> ---8<---
> https://lore.kernel.org/netdev/20201207132456.65472-8-kuniyu@amazon.co.jp/
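To be clear on the rcu point above, the shape that is needed is roughly
(untested sketch):

---8<---
rcu_read_lock();
nsk = __reuseport_select_sock(sk, hash, skb, 0,
			      BPF_SK_REUSEPORT_MIGRATE_REQUEST);
/* inc_not_zero is only safe here because RCU prevents the
 * listener's memory from being freed under us.
 */
if (nsk && unlikely(!refcount_inc_not_zero(&nsk->sk_refcnt)))
	nsk = NULL;
rcu_read_unlock();
---8<---

i.e. either the helper takes rcu_read_lock() itself or every caller has
to guarantee it is already in a rcu read-side section.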
>
>
> > > +	sock_hold(nsk);
> > > +	req->rsk_listener = nsk;
It looks like there is another race here. What
if multiple cores try to update req->rsk_listener?
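If that has to be handled, one possible shape (untested, purely to
illustrate; the cmpxchg() use and the surrounding error handling here
are hypothetical, not something the patch does) is to let a single core
win the pointer update:

---8<---
sock_hold(nsk);			/* ref to be owned by req->rsk_listener */
if (cmpxchg(&req->rsk_listener, sk, nsk) != sk) {
	/* another core already migrated this req; back out */
	sock_put(nsk);
	return;
}
sock_put(sk);			/* drop the req's ref on the old listener */
---8<---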