Message-ID: <aQJDhkIsexabyGXf@horms.kernel.org>
Date: Wed, 29 Oct 2025 16:40:38 +0000
From: Simon Horman <horms@...nel.org>
To: Stefan Wiehler <stefan.wiehler@...ia.com>
Cc: Xin Long <lucien.xin@...il.com>,
"David S . Miller " <davem@...emloft.net>,
Eric Dumazet <edumazet@...gle.com>,
Jakub Kicinski <kuba@...nel.org>, Paolo Abeni <pabeni@...hat.com>,
Kuniyuki Iwashima <kuniyu@...gle.com>, linux-sctp@...r.kernel.org,
netdev@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH net v3 1/3] sctp: Hold RCU read lock while iterating over
address list
On Wed, Oct 29, 2025 at 04:38:44PM +0000, Simon Horman wrote:
> On Tue, Oct 28, 2025 at 05:12:26PM +0100, Stefan Wiehler wrote:
> > With CONFIG_PROVE_RCU_LIST=y and by executing
> >
> > $ netcat -l --sctp &
> > $ netcat --sctp localhost &
> > $ ss --sctp
> >
> > one can trigger the following Lockdep-RCU splat(s):
>
> ...
>
> > diff --git a/net/sctp/diag.c b/net/sctp/diag.c
> > index 996c2018f0e6..1a8761f87bf1 100644
> > --- a/net/sctp/diag.c
> > +++ b/net/sctp/diag.c
> > @@ -73,19 +73,23 @@ static int inet_diag_msg_sctpladdrs_fill(struct sk_buff *skb,
> > struct nlattr *attr;
> > void *info = NULL;
> >
> > + rcu_read_lock();
> > list_for_each_entry_rcu(laddr, address_list, list)
> > addrcnt++;
> > + rcu_read_unlock();
> >
> > attr = nla_reserve(skb, INET_DIAG_LOCALS, addrlen * addrcnt);
> > if (!attr)
> > return -EMSGSIZE;
> >
> > info = nla_data(attr);
>
> Hi Stefan,
>
> If the number of entries in the list increases while rcu_read_lock is
> not held, between when addrcnt is calculated and when info is written,
> can an overrun occur while writing info?
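To spell out the concern, the window looks roughly like this (a sketch
of the control flow, not the exact code):

	rcu_read_lock();
	/* first loop: count the entries, addrcnt = N */
	rcu_read_unlock();

	/* nothing prevents another CPU from adding a local address
	 * here, so the list may now hold more than N entries
	 */

	attr = nla_reserve(skb, INET_DIAG_LOCALS, addrlen * addrcnt);

	rcu_read_lock();
	/* second loop: copies one addrlen-sized record per entry,
	 * i.e. potentially more than the N * addrlen bytes reserved
	 */
	rcu_read_unlock();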
Oops, I now see that this is addressed in patch 2/3.
Sorry for not reading that before sending my previous email.
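For reference, one way to make the copy robust against that window
(just a sketch, not necessarily what patch 2/3 does) would be to bound
the second loop by the count the attribute was reserved for:

	rcu_read_lock();
	list_for_each_entry_rcu(laddr, address_list, list) {
		if (!addrcnt--)
			break;
		memcpy(info, &laddr->a, sizeof(laddr->a));
		memset(info + sizeof(laddr->a), 0,
		       addrlen - sizeof(laddr->a));
		info += addrlen;
	}
	rcu_read_unlock();

With that, a concurrent insertion can at worst truncate the dump
rather than overrun the reserved attribute.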
>
> > + rcu_read_lock();
> > list_for_each_entry_rcu(laddr, address_list, list) {
> > memcpy(info, &laddr->a, sizeof(laddr->a));
> > memset(info + sizeof(laddr->a), 0, addrlen - sizeof(laddr->a));
> > info += addrlen;
> > }
> > + rcu_read_unlock();
> >
> > return 0;
> > }
> > --
> > 2.51.0
> >