Message-ID: <e49be1a312a449d435af3bfef42493ca9b984869.camel@oracle.com>
Date: Thu, 8 Feb 2024 17:03:05 +0000
From: Allison Henderson <allison.henderson@...cle.com>
To: "yanjun.zhu@...ux.dev" <yanjun.zhu@...ux.dev>,
"netdev@...r.kernel.org"
<netdev@...r.kernel.org>
CC: "rds-devel@....oracle.com" <rds-devel@....oracle.com>,
Santosh Shilimkar
<santosh.shilimkar@...cle.com>,
"edumazet@...gle.com" <edumazet@...gle.com>,
"kuba@...nel.org" <kuba@...nel.org>,
"linux-rdma@...r.kernel.org"
<linux-rdma@...r.kernel.org>,
"pabeni@...hat.com" <pabeni@...hat.com>,
"davem@...emloft.net" <davem@...emloft.net>
Subject: Re: [PATCH v3 1/1] net:rds: Fix possible deadlock in rds_message_put
On Thu, 2024-02-08 at 15:37 +0800, Zhu Yanjun wrote:
> On 2024/2/8 7:38, allison.henderson@...cle.com wrote:
> > From: Allison Henderson <allison.henderson@...cle.com>
> >
> > Functions rds_still_queued and rds_clear_recv_queue lock a given
> > socket in order to safely iterate over the incoming RDS messages.
> > However, calling rds_inc_put while under this lock creates a
> > potential deadlock: rds_inc_put may eventually call
> > rds_message_purge, which locks m_rs_lock. This is the incorrect
> > locking order, since m_rs_lock is meant to be taken before the
> > socket lock. To fix this, move each message to a local list or
> > variable that won't need rs_recv_lock protection. Then we can
> > safely call rds_inc_put on any locally stored item after
> > rs_recv_lock is released.
> >
> > Fixes: bdbe6fbc6a2f (RDS: recv.c)
>
> A trivial problem: based on
> https://www.kernel.org/doc/Documentation/process/submitting-patches.rst,
> the Fixes tag should be the following?
>
> Fixes: bdbe6fbc6a2f ("RDS: recv.c")
Ah, I had missed the quotation marks. Will update. Thanks for the review!
Allison
>
> Thanks,
> Zhu Yanjun
>
> > Reported-by: syzbot+f9db6ff27b9bfdcfeca0@...kaller.appspotmail.com
> > Reported-by: syzbot+dcd73ff9291e6d34b3ab@...kaller.appspotmail.com
> >
> > Signed-off-by: Allison Henderson <allison.henderson@...cle.com>
> > ---
> > net/rds/recv.c | 13 +++++++++++--
> > 1 file changed, 11 insertions(+), 2 deletions(-)
> >
> > diff --git a/net/rds/recv.c b/net/rds/recv.c
> > index c71b923764fd..5627f80013f8 100644
> > --- a/net/rds/recv.c
> > +++ b/net/rds/recv.c
> > @@ -425,6 +425,7 @@ static int rds_still_queued(struct rds_sock *rs, struct rds_incoming *inc,
> > struct sock *sk = rds_rs_to_sk(rs);
> > int ret = 0;
> > unsigned long flags;
> > + struct rds_incoming *to_drop = NULL;
> >
> > write_lock_irqsave(&rs->rs_recv_lock, flags);
> > if (!list_empty(&inc->i_item)) {
> > @@ -435,11 +436,14 @@ static int rds_still_queued(struct rds_sock *rs, struct rds_incoming *inc,
> > -be32_to_cpu(inc->i_hdr.h_len),
> > inc->i_hdr.h_dport);
> > list_del_init(&inc->i_item);
> > - rds_inc_put(inc);
> > + to_drop = inc;
> > }
> > }
> > write_unlock_irqrestore(&rs->rs_recv_lock, flags);
> >
> > + if (to_drop)
> > + rds_inc_put(to_drop);
> > +
> > rdsdebug("inc %p rs %p still %d dropped %d\n", inc, rs,
> > ret, drop);
> > return ret;
> > }
> > @@ -758,16 +762,21 @@ void rds_clear_recv_queue(struct rds_sock *rs)
> > struct sock *sk = rds_rs_to_sk(rs);
> > struct rds_incoming *inc, *tmp;
> > unsigned long flags;
> > + LIST_HEAD(to_drop);
> >
> > write_lock_irqsave(&rs->rs_recv_lock, flags);
> > list_for_each_entry_safe(inc, tmp, &rs->rs_recv_queue, i_item) {
> > rds_recv_rcvbuf_delta(rs, sk, inc->i_conn->c_lcong,
> > -be32_to_cpu(inc->i_hdr.h_len),
> > inc->i_hdr.h_dport);
> > + list_move(&inc->i_item, &to_drop);
> > + }
> > + write_unlock_irqrestore(&rs->rs_recv_lock, flags);
> > +
> > + list_for_each_entry_safe(inc, tmp, &to_drop, i_item) {
> > list_del_init(&inc->i_item);
> > rds_inc_put(inc);
> > }
> > - write_unlock_irqrestore(&rs->rs_recv_lock, flags);
> > }
> >
> > /*
>