Message-ID: <ZzOMfec9pRhfua-6@LQ3V64L9R2>
Date: Tue, 12 Nov 2024 09:12:29 -0800
From: Joe Damato <jdamato@...tly.com>
To: Paolo Abeni <pabeni@...hat.com>
Cc: netdev@...r.kernel.org, mkarsten@...terloo.ca, skhawaja@...gle.com,
sdf@...ichev.me, bjorn@...osinc.com, amritha.nambiar@...el.com,
sridhar.samudrala@...el.com, willemdebruijn.kernel@...il.com,
edumazet@...gle.com, Jakub Kicinski <kuba@...nel.org>,
Donald Hunter <donald.hunter@...il.com>,
"David S. Miller" <davem@...emloft.net>,
Jesper Dangaard Brouer <hawk@...nel.org>,
Mina Almasry <almasrymina@...gle.com>,
Xuan Zhuo <xuanzhuo@...ux.alibaba.com>,
open list <linux-kernel@...r.kernel.org>
Subject: Re: [net-next v6 6/9] netdev-genl: Support setting per-NAPI config
values
On Tue, Nov 12, 2024 at 10:17:40AM +0100, Paolo Abeni wrote:
> On 10/11/24 20:45, Joe Damato wrote:
> > +int netdev_nl_napi_set_doit(struct sk_buff *skb, struct genl_info *info)
> > +{
> > +        struct napi_struct *napi;
> > +        unsigned int napi_id;
> > +        int err;
> > +
> > +        if (GENL_REQ_ATTR_CHECK(info, NETDEV_A_NAPI_ID))
> > +                return -EINVAL;
> > +
> > +        napi_id = nla_get_u32(info->attrs[NETDEV_A_NAPI_ID]);
> > +
> > +        rtnl_lock();
> > +
> > +        napi = napi_by_id(napi_id);
>
> AFAICS the above causes a RCU splat in the selftests:
>
> https://netdev-3.bots.linux.dev/vmksft-net-dbg/results/856342/61-busy-poll-test-sh/stderr
>
> because napi_by_id() only checks for the RCU lock.
>
> Could you please have a look?

Thanks for letting me know.

I rebuilt my kernel with CONFIG_PROVE_RCU_LIST and a couple of other
debugging options enabled and was able to reproduce the splat you
mentioned.
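
For reference (quoting from memory, so please double-check against
net/core/dev.c), napi_by_id() walks the napi_hash table with an RCU
list iterator, roughly:

static struct napi_struct *napi_by_id(unsigned int napi_id)
{
        unsigned int hash = napi_id % HASH_SIZE(napi_hash);
        struct napi_struct *napi;

        /* RCU list walk; lockdep expects the RCU read lock here */
        hlist_for_each_entry_rcu(napi, &napi_hash[hash], napi_hash_node)
                if (napi->napi_id == napi_id)
                        return napi;

        return NULL;
}

As I understand it, with CONFIG_PROVE_RCU_LIST the
hlist_for_each_entry_rcu() lockdep check wants rcu_read_lock() (or an
explicit lockdep condition), and holding rtnl_lock() alone doesn't
satisfy it, which matches the splat in the selftest log above.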

I took a look, and it seems there are two issues:

  - netdev_nl_napi_set_doit needs to call rcu_read_lock /
    rcu_read_unlock (see the sketch at the end of this mail), which
    would be a fix with a Fixes tag pointing at the commit in the
    just-merged series, and

  - netdev_nl_napi_get_doit has the same issue and should be fixed
    in a separate commit with its own Fixes tag.

If that sounds right to you, I'll send a short series of two patches,
one for each fix. Let me know if that works.

In the meantime, I'm rebuilding a kernel now to confirm that the
proposed fix actually resolves the splat.
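
To make the shape of the change concrete, here's a rough, untested
sketch of what I have in mind for the set path (everything the quoted
hunk above doesn't show is elided here); the get path would get the
same treatment:

        rtnl_lock();

        /* napi_by_id() walks an RCU-protected hlist, so hold the RCU
         * read lock across the lookup instead of relying on
         * rtnl_lock() alone.
         */
        rcu_read_lock();

        napi = napi_by_id(napi_id);

        /* ... apply the per-NAPI config values / set err as before ... */

        rcu_read_unlock();
        rtnl_unlock();

        return err;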