Message-ID: <CANn89iKiT-XvO00cygyMcc-EqToPLuyU3wX+jthQW7YnW7o2Bg@mail.gmail.com>
Date: Fri, 12 Jan 2024 10:42:25 +0100
From: Eric Dumazet <edumazet@...gle.com>
To: Zhengchao Shao <shaozhengchao@...wei.com>
Cc: netdev@...r.kernel.org, davem@...emloft.net, dsahern@...nel.org,
kuba@...nel.org, pabeni@...hat.com, weiyongjun1@...wei.com,
yuehaibing@...wei.com
Subject: Re: [PATCH net-next] tcp: do not hold spinlock when sk state is not TCP_LISTEN
On Fri, Jan 12, 2024 at 2:26 AM Zhengchao Shao <shaozhengchao@...wei.com> wrote:
>
> When I run the syzkaller C reproducer locally, it triggers the following
> issue:
>
> The issue triggering process is analyzed as follows:
> Thread A                                  Thread B
> tcp_v4_rcv //receive ack TCP packet       inet_shutdown
>   tcp_check_req                             tcp_disconnect //disconnect sock
>   ...                                         tcp_set_state(sk, TCP_CLOSE)
>   inet_csk_complete_hashdance               ...
>     inet_csk_reqsk_queue_add              inet_listen //start listen
>       spin_lock(&queue->rskq_lock)          inet_csk_listen_start
>       ...                                     reqsk_queue_alloc
>       ...                                       spin_lock_init
>       spin_unlock(&queue->rskq_lock) //warning
>
> When the socket receives the ACK packet of the three-way handshake, it
> takes the rskq_lock spinlock. If the user then shuts down the socket and
> immediately listens on it again, the spinlock is re-initialized while it
> is still held. When the first path finally releases the spinlock, a
> warning is generated.
>
> The rskq_lock spinlock protects only the request_sock_queue structure.
> Therefore, inet_csk_reqsk_queue_add does not need to take rskq_lock when
> the TCP state is not TCP_LISTEN.
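
For context: if I am reading the current code correctly, reqsk_queue_alloc()
re-runs spin_lock_init() on both accept-queue locks every time listen() is
called, which is what allows thread B above to re-initialize a lock that
thread A still holds. Roughly (from net/core/request_sock.c):

void reqsk_queue_alloc(struct request_sock_queue *queue)
{
	/* Runs from inet_csk_listen_start() on every listen(), so a
	 * shutdown() + listen() on a live socket re-initializes locks
	 * that another CPU may still be holding.
	 */
	spin_lock_init(&queue->rskq_lock);

	spin_lock_init(&queue->fastopenq.lock);
	queue->fastopenq.rskq_rst_head = NULL;
	queue->fastopenq.rskq_rst_tail = NULL;
	queue->fastopenq.qlen = 0;

	queue->rskq_accept_head = NULL;
}
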
>
> Fixes: fff1f3001cc5 ("tcp: add a spinlock to protect struct request_sock_queue")
> Signed-off-by: Zhengchao Shao <shaozhengchao@...wei.com>
> ---
> net/ipv4/inet_connection_sock.c | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/net/ipv4/inet_connection_sock.c b/net/ipv4/inet_connection_sock.c
> index 8e2eb1793685..b100a89c3d98 100644
> --- a/net/ipv4/inet_connection_sock.c
> +++ b/net/ipv4/inet_connection_sock.c
> @@ -1295,11 +1295,11 @@ struct sock *inet_csk_reqsk_queue_add(struct sock *sk,
>  {
>  	struct request_sock_queue *queue = &inet_csk(sk)->icsk_accept_queue;
> 
> -	spin_lock(&queue->rskq_lock);
>  	if (unlikely(sk->sk_state != TCP_LISTEN)) {
>  		inet_child_forget(sk, req, child);
>  		child = NULL;
>  	} else {
> +		spin_lock(&queue->rskq_lock);
>  		req->sk = child;
>  		req->dl_next = NULL;
>  		if (queue->rskq_accept_head == NULL)
> @@ -1308,8 +1308,8 @@ struct sock *inet_csk_reqsk_queue_add(struct sock *sk,
>  			queue->rskq_accept_tail->dl_next = req;
>  		queue->rskq_accept_tail = req;
>  		sk_acceptq_added(sk);
> +		spin_unlock(&queue->rskq_lock);
>  	}
> -	spin_unlock(&queue->rskq_lock);
>  	return child;
>  }
>  EXPORT_SYMBOL(inet_csk_reqsk_queue_add);
> --
> 2.34.1
>
This is not how I would fix the issue; it would still be racy, because
the 'listener' sk_state can change at any time.
queue->fastopenq.lock would probably have a similar issue.
Please make sure we init the spinlock(s) once.
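
Something like this untested sketch is what I have in mind: do the
spin_lock_init() calls in a path that runs only once per socket, and let
reqsk_queue_alloc() reset only the queue state. The helper name and call
site below are just for illustration, not a concrete patch:

/* Untested sketch: init the accept-queue spinlocks exactly once per
 * socket (e.g. at socket creation time) instead of on every listen().
 * Hypothetical helper, name chosen only for this example.
 */
static void inet_csk_accept_queue_locks_init(struct sock *sk)
{
	struct request_sock_queue *queue = &inet_csk(sk)->icsk_accept_queue;

	spin_lock_init(&queue->rskq_lock);
	spin_lock_init(&queue->fastopenq.lock);
}

/* reqsk_queue_alloc() would then only reset queue state and never touch
 * a lock that another CPU might still be holding across a re-listen.
 */
void reqsk_queue_alloc(struct request_sock_queue *queue)
{
	queue->fastopenq.rskq_rst_head = NULL;
	queue->fastopenq.rskq_rst_tail = NULL;
	queue->fastopenq.qlen = 0;

	queue->rskq_accept_head = NULL;
}

That also covers queue->fastopenq.lock, which has the same re-init problem.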