Message-ID: <20241004203718.67792-1-kuniyu@amazon.com>
Date: Fri, 4 Oct 2024 13:37:18 -0700
From: Kuniyuki Iwashima <kuniyu@...zon.com>
To: <martin.lau@...ux.dev>
CC: <bpf@...r.kernel.org>, <edumazet@...gle.com>, <kuba@...nel.org>,
<kuniyu@...zon.com>, <netdev@...r.kernel.org>
Subject: Re: [Question]: A non NULL req->sk in tcp_rtx_synack. Not a fastopen connection.
From: Martin KaFai Lau <martin.lau@...ux.dev>
Date: Thu, 3 Oct 2024 21:00:20 -0700
> On 10/3/24 7:02 PM, Kuniyuki Iwashima wrote:
> > From: Martin KaFai Lau <martin.lau@...ux.dev>
> > Date: Thu, 3 Oct 2024 18:14:09 -0700
> >> Hi,
> >>
> >> We are seeing a use-after-free from a bpf prog attached to
> >> trace_tcp_retransmit_synack. The program passes the req->sk to the
> >> bpf_sk_storage_get_tracing kernel helper which does check for null before using it.
> >>
> >> fastopen is not used.
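
Just to make the setup concrete for other readers, such a prog would be
roughly of the following shape.  This is only a sketch, not your actual
prog; the map/prog names, the tp_btf attach type, and the per-sk counter
it stores are all made up here.

/* Sketch only: count SYN+ACK retransmits per req->sk in sk storage. */
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

struct rtx_stat {
	__u64 synack_rtx;
};

struct {
	__uint(type, BPF_MAP_TYPE_SK_STORAGE);
	__uint(map_flags, BPF_F_NO_PREALLOC);
	__type(key, int);
	__type(value, struct rtx_stat);
} rtx_stg SEC(".maps");

SEC("tp_btf/tcp_retransmit_synack")
int BPF_PROG(on_rtx_synack, const struct sock *sk,
	     const struct request_sock *req)
{
	struct rtx_stat *stat;

	/* req->sk is normally NULL for a non-fastopen req; the tracing
	 * variant of bpf_sk_storage_get() checks for NULL, but the race
	 * discussed below can hand it a stale pointer.
	 */
	stat = bpf_sk_storage_get(&rtx_stg, req->sk, NULL,
				  BPF_SK_STORAGE_GET_F_CREATE);
	if (stat)
		__sync_fetch_and_add(&stat->synack_rtx, 1);

	return 0;
}

char _license[] SEC("license") = "GPL";
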
> >>
> >> We got a kfence report on use-after-free (pasted at the end). It is running with
> >> an older 6.4 kernel and we hardly hit this in production.
> >>
> >> From the upstream code, del_timer_sync() should have been done by
> >> inet_csk_reqsk_queue_drop() before "req->sk = child;" is assigned in
> >> inet_csk_reqsk_queue_add(). My understanding is the req->rsk_timer should have
> >> been stopped before the "req->sk = child;" assignment.
> >
> > There seems to be a small race window in reqsk_queue_unlink().
> >
> > expire_timers() first calls detach_timer(, true), which marks the timer
> > as not pending, and then calls reqsk_timer_handler().
> >
> > If reqsk_queue_unlink() calls timer_pending() just before expire_timers()
> > calls reqsk_timer_handler(), reqsk_queue_unlink() could miss
> > del_timer_sync() ?
>
> This seems to explain it. :)
>
> Does it mean there is a chance that reqsk_timer_handler() may rearm the
> timer again?  I guess only a few more synacks would be sent in this case,
> so there should be no harm?
Ah, it seems possible.  I was wondering how the timer handler could be
delayed until the sk is freed, and the rearming would explain it.  In
such a case, the extra SYN+ACKs will just make the peer generate some
challenge ACKs, so there should be no harm.
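
To put the window in a picture, my understanding is roughly the
following (simplified; the exact call sites may differ a bit):

  CPU 1 (3WHS completes)                CPU 2 (timer softirq)
  ----------------------                ---------------------
                                        expire_timers()
                                          detach_timer(timer, true)
                                          /* rsk_timer no longer pending */
  inet_csk_reqsk_queue_drop()
    reqsk_queue_unlink()
      timer_pending() -> false
      /* del_timer_sync() and the
       * timer's reqsk_put() skipped
       */
  inet_csk_reqsk_queue_add()
    req->sk = child
                                        reqsk_timer_handler()
                                          inet_rtx_syn_ack()
                                            trace_tcp_retransmit_synack(sk, req)
                                            /* bpf prog reads req->sk, which
                                             * may point to an already freed
                                             * child on a later rearm
                                             */
                                          mod_timer()  /* rearm */
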
>
> >
> > ---8<---
> > diff --git a/net/ipv4/inet_connection_sock.c b/net/ipv4/inet_connection_sock.c
> > index 2c5632d4fddb..4ba47ee6c9da 100644
> > --- a/net/ipv4/inet_connection_sock.c
> > +++ b/net/ipv4/inet_connection_sock.c
> > @@ -1045,7 +1045,7 @@ static bool reqsk_queue_unlink(struct request_sock *req)
> > found = __sk_nulls_del_node_init_rcu(sk);
> > spin_unlock(lock);
> > }
> > - if (timer_pending(&req->rsk_timer) && del_timer_sync(&req->rsk_timer))
> > + if (del_timer_sync(&req->rsk_timer))
>
> It seems the reqsk_timer_handler() will also call reqsk_queue_unlink() through
> inet_csk_reqsk_queue_drop_and_put(). Not sure if the reqsk_timer_handler() can
> del_timer_sync() itself.
Exactly, it seems illegal to call del_timer_sync() from the timer
handler itself.
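As I read the current code, the chain from the timer would be roughly
(simplified):

  reqsk_timer_handler()                     <- timer callback is running
    inet_csk_reqsk_queue_drop_and_put()
      inet_csk_reqsk_queue_drop()
        reqsk_queue_unlink()
          del_timer_sync(&req->rsk_timer)   <- spins forever waiting for
                                               the handler we are in
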
Then, we need a variant of inet_csk_reqsk_queue_drop() that knows
whether the caller is the timer or not.  (The patch below is
compile-test only.)
---8<---
diff --git a/net/ipv4/inet_connection_sock.c b/net/ipv4/inet_connection_sock.c
index 2c5632d4fddb..2623964d8817 100644
--- a/net/ipv4/inet_connection_sock.c
+++ b/net/ipv4/inet_connection_sock.c
@@ -1045,21 +1045,31 @@ static bool reqsk_queue_unlink(struct request_sock *req)
found = __sk_nulls_del_node_init_rcu(sk);
spin_unlock(lock);
}
- if (timer_pending(&req->rsk_timer) && del_timer_sync(&req->rsk_timer))
- reqsk_put(req);
+
return found;
}
-bool inet_csk_reqsk_queue_drop(struct sock *sk, struct request_sock *req)
+static bool __inet_csk_reqsk_queue_drop(struct sock *sk,
+ struct request_sock *req,
+ bool from_timer)
{
bool unlinked = reqsk_queue_unlink(req);
+ if (!from_timer && del_timer_sync(&req->rsk_timer))
+ reqsk_put(req);
+
if (unlinked) {
reqsk_queue_removed(&inet_csk(sk)->icsk_accept_queue, req);
reqsk_put(req);
}
+
return unlinked;
}
+
+bool inet_csk_reqsk_queue_drop(struct sock *sk, struct request_sock *req)
+{
+ return __inet_csk_reqsk_queue_drop(sk, req, false);
+}
EXPORT_SYMBOL(inet_csk_reqsk_queue_drop);
void inet_csk_reqsk_queue_drop_and_put(struct sock *sk, struct request_sock *req)
@@ -1152,7 +1162,7 @@ static void reqsk_timer_handler(struct timer_list *t)
if (!inet_ehash_insert(req_to_sk(nreq), req_to_sk(oreq), NULL)) {
/* delete timer */
- inet_csk_reqsk_queue_drop(sk_listener, nreq);
+ __inet_csk_reqsk_queue_drop(sk_listener, nreq, true);
goto no_ownership;
}
@@ -1178,7 +1188,8 @@ static void reqsk_timer_handler(struct timer_list *t)
}
drop:
- inet_csk_reqsk_queue_drop_and_put(oreq->rsk_listener, oreq);
+ __inet_csk_reqsk_queue_drop(sk_listener, oreq, true);
+ reqsk_put(req);
}
static bool reqsk_queue_hash_req(struct request_sock *req,
---8<---