Message-ID: <c2ff1709-2e4a-4844-af86-216ae678be0b@linux.dev>
Date: Thu, 8 Feb 2024 15:37:18 +0800
From: Zhu Yanjun <yanjun.zhu@...ux.dev>
To: allison.henderson@...cle.com, netdev@...r.kernel.org
Cc: rds-devel@....oracle.com, linux-rdma@...r.kernel.org, pabeni@...hat.com,
kuba@...nel.org, edumazet@...gle.com, davem@...emloft.net,
santosh.shilimkar@...cle.com
Subject: Re: [PATCH v3 1/1] net:rds: Fix possible deadlock in rds_message_put
On 2024/2/8 7:38, allison.henderson@...cle.com wrote:
> From: Allison Henderson <allison.henderson@...cle.com>
>
> Functions rds_still_queued and rds_clear_recv_queue lock a given socket
> in order to safely iterate over the incoming rds messages. However,
> calling rds_inc_put while under this lock creates a potential deadlock.
> rds_inc_put may eventually call rds_message_purge, which will lock
> m_rs_lock. This is the incorrect locking order since m_rs_lock is
> meant to be locked before the socket. To fix this, we move the message
> item to a local list or variable that won't need rs_recv_lock protection.
> Then we can safely call rds_inc_put on any item stored locally after
> rs_recv_lock is released.
>
> Fixes: bdbe6fbc6a2f (RDS: recv.c)
A trivial problem: based on
https://www.kernel.org/doc/Documentation/process/submitting-patches.rst,
shouldn't the Fixes tag be the following?
Fixes: bdbe6fbc6a2f ("RDS: recv.c")
Thanks,
Zhu Yanjun
> Reported-by: syzbot+f9db6ff27b9bfdcfeca0@...kaller.appspotmail.com
> Reported-by: syzbot+dcd73ff9291e6d34b3ab@...kaller.appspotmail.com
>
> Signed-off-by: Allison Henderson <allison.henderson@...cle.com>
> ---
> net/rds/recv.c | 13 +++++++++++--
> 1 file changed, 11 insertions(+), 2 deletions(-)
>
> diff --git a/net/rds/recv.c b/net/rds/recv.c
> index c71b923764fd..5627f80013f8 100644
> --- a/net/rds/recv.c
> +++ b/net/rds/recv.c
> @@ -425,6 +425,7 @@ static int rds_still_queued(struct rds_sock *rs, struct rds_incoming *inc,
>          struct sock *sk = rds_rs_to_sk(rs);
>          int ret = 0;
>          unsigned long flags;
> +        struct rds_incoming *to_drop = NULL;
>
>          write_lock_irqsave(&rs->rs_recv_lock, flags);
>          if (!list_empty(&inc->i_item)) {
> @@ -435,11 +436,14 @@ static int rds_still_queued(struct rds_sock *rs, struct rds_incoming *inc,
>                                                -be32_to_cpu(inc->i_hdr.h_len),
>                                                inc->i_hdr.h_dport);
>                          list_del_init(&inc->i_item);
> -                        rds_inc_put(inc);
> +                        to_drop = inc;
>                  }
>          }
>          write_unlock_irqrestore(&rs->rs_recv_lock, flags);
>
> +        if (to_drop)
> +                rds_inc_put(to_drop);
> +
>          rdsdebug("inc %p rs %p still %d dropped %d\n", inc, rs, ret, drop);
>          return ret;
>  }
> @@ -758,16 +762,21 @@ void rds_clear_recv_queue(struct rds_sock *rs)
>          struct sock *sk = rds_rs_to_sk(rs);
>          struct rds_incoming *inc, *tmp;
>          unsigned long flags;
> +        LIST_HEAD(to_drop);
>
>          write_lock_irqsave(&rs->rs_recv_lock, flags);
>          list_for_each_entry_safe(inc, tmp, &rs->rs_recv_queue, i_item) {
>                  rds_recv_rcvbuf_delta(rs, sk, inc->i_conn->c_lcong,
>                                        -be32_to_cpu(inc->i_hdr.h_len),
>                                        inc->i_hdr.h_dport);
> +                list_move(&inc->i_item, &to_drop);
> +        }
> +        write_unlock_irqrestore(&rs->rs_recv_lock, flags);
> +
> +        list_for_each_entry_safe(inc, tmp, &to_drop, i_item) {
>                  list_del_init(&inc->i_item);
>                  rds_inc_put(inc);
>          }
> -        write_unlock_irqrestore(&rs->rs_recv_lock, flags);
>  }
>
>  /*
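For reference, here is a minimal, self-contained userspace sketch of the pattern the patch applies (hypothetical types and names, not the RDS code): unlink the entries onto a local list while the queue lock is held, release the lock, and only then run the "put" path, so whatever locks the put takes internally can never nest inside the queue lock.

/*
 * Minimal userspace sketch of the defer-the-put pattern, with
 * hypothetical types and names (not RDS code): unlink items onto a
 * local list under the lock, release the lock, then drop references.
 */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

struct item {
        struct item *next;
        int refcount;
};

static pthread_mutex_t queue_lock = PTHREAD_MUTEX_INITIALIZER;
static struct item *queue_head;

/* Stand-in for rds_inc_put(): it may take other locks internally,
 * so it must not be called while queue_lock is held. */
static void item_put(struct item *it)
{
        if (--it->refcount == 0)
                free(it);
}

static void clear_queue(void)
{
        struct item *to_drop, *it, *next;

        pthread_mutex_lock(&queue_lock);
        /* Move the whole queue onto a local list under the lock. */
        to_drop = queue_head;
        queue_head = NULL;
        pthread_mutex_unlock(&queue_lock);

        /* Drop the references with queue_lock no longer held. */
        for (it = to_drop; it; it = next) {
                next = it->next;
                item_put(it);
        }
}

int main(void)
{
        for (int i = 0; i < 3; i++) {
                struct item *it = calloc(1, sizeof(*it));
                it->refcount = 1;
                it->next = queue_head;
                queue_head = it;
        }
        clear_queue();
        printf("queue cleared without holding queue_lock across puts\n");
        return 0;
}

The patch does the same thing in rds_clear_recv_queue() with list_move() onto a local LIST_HEAD(), and in rds_still_queued() with a single local pointer, so rds_inc_put() is no longer called under rs_recv_lock.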