Message-ID: <Z0fEm2EmZ6q1c9Mu@LQ3V64L9R2>
Date: Wed, 27 Nov 2024 17:17:15 -0800
From: Joe Damato <jdamato@...tly.com>
To: Guenter Roeck <linux@...ck-us.net>
Cc: netdev@...r.kernel.org, mkarsten@...terloo.ca, skhawaja@...gle.com,
sdf@...ichev.me, bjorn@...osinc.com, amritha.nambiar@...el.com,
sridhar.samudrala@...el.com, willemdebruijn.kernel@...il.com,
edumazet@...gle.com, Jakub Kicinski <kuba@...nel.org>,
"David S. Miller" <davem@...emloft.net>,
Paolo Abeni <pabeni@...hat.com>, Jonathan Corbet <corbet@....net>,
Jiri Pirko <jiri@...nulli.us>,
Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
Lorenzo Bianconi <lorenzo@...nel.org>,
Johannes Berg <johannes.berg@...el.com>,
"open list:DOCUMENTATION" <linux-doc@...r.kernel.org>,
open list <linux-kernel@...r.kernel.org>, pcnet32@...ntier.com
Subject: Re: [net-next v6 5/9] net: napi: Add napi_config

On Wed, Nov 27, 2024 at 01:43:14PM -0800, Guenter Roeck wrote:
> On 11/27/24 10:51, Joe Damato wrote:
> > On Wed, Nov 27, 2024 at 09:43:54AM -0800, Guenter Roeck wrote:
> > > Hi,
> > >
> > > On Fri, Oct 11, 2024 at 06:45:00PM +0000, Joe Damato wrote:
[...]
> > It seems this was triggered because before the identified commit,
> > napi_enable did not call napi_hash_add (and thus did not take the
> > napi_hash_lock).
> >
> > So, I agree there is an inversion; I can't say for sure if this
> > would cause a deadlock in certain situations. It seems like
> > napi_hash_del in the close path will return, so the inversion
> > doesn't seem like it'd lead to a deadlock, but I am not an expert in
> > this and could certainly be wrong.
> >
> > I wonder if a potential fix for this would be in the driver's close
> > function?
> >
> > In pcnet32_open the order is:
> > lock(lp->lock)
> > napi_enable
> > netif_start_queue
> > mod_timer(watchdog)
> > unlock(lp->lock)
> >
> > Perhaps pcnet32_close should be the same?
> >
> > I've included an example patch below for pcnet32_close and I've CC'd
> > the maintainer of pcnet32 that is not currently CC'd.
> >
> > Guenter: Is there any chance you might be able to test the proposed
> > patch below?
> >
>
> I moved the spinlock after del_timer_sync() because it is not a good idea
> to hold a spinlock when calling that function. That results in:
>
> [ 10.646956] BUG: sleeping function called from invalid context at net/core/dev.c:6775
> [ 10.647142] in_atomic(): 1, irqs_disabled(): 1, non_block: 0, pid: 1817, name: ip
> [ 10.647237] preempt_count: 1, expected: 0
> [ 10.647319] 2 locks held by ip/1817:
> [ 10.647383] #0: ffffffff81ded990 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_newlink+0x22a/0x74c
> [ 10.647880] #1: ff6000000498ccb0 (&lp->lock){-.-.}-{3:3}, at: pcnet32_close+0x40/0x126
[...]
> This is due to might_sleep() at the beginning of napi_disable(). So it doesn't
> work as intended, it just replaces one problem with another.

Thanks for testing that. And you are right, it is also not correct.

I will give it some thought to see if I can think of something
better.

Maybe Don will have some ideas.
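
One direction that might work is to go the other way: leave
pcnet32_close() alone and instead move napi_enable() out from under
lp->lock in pcnet32_open(), so that napi_hash_lock is never taken
while lp->lock is held with IRQs disabled. A rough, completely
untested sketch follows; the real open path does much more
initialization under the lock than shown, and the exact placement of
the calls is just my guess:

        /* Untested sketch, heavily simplified. */
        static int pcnet32_open(struct net_device *dev)
        {
                struct pcnet32_private *lp = netdev_priv(dev);
                unsigned long flags;

                /* napi_enable() now takes napi_hash_lock (via
                 * napi_hash_add()), so call it before acquiring
                 * lp->lock instead of under it.
                 */
                napi_enable(&lp->napi);

                spin_lock_irqsave(&lp->lock, flags);
                /* ... existing hardware init, netif_start_queue(),
                 * and the watchdog mod_timer() stay under the lock
                 * as they do today ...
                 */
                spin_unlock_irqrestore(&lp->lock, flags);

                return 0;
        }

Whether it is actually safe for this device to have NAPI enabled
before the rest of the init under lp->lock has run is something I
can't judge from the code alone, so please treat this as a sketch of
the direction rather than a proposed patch.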
> > Don: Would you mind taking a look to see if this change is sensible?
> >
> > Netdev maintainers: at a higher level, I'm not sure how many other
> > drivers might have locking patterns like this that commit
> > 86e25f40aa1e ("net: napi: Add napi_config") will break in a similar
> > manner.
> >
> > Do I:
> > - comb through drivers trying to identify these, and/or
>
> Coccinelle, checking for napi_enable calls under spinlock, points to:
>
> napi_enable called under spin_lock_irqsave from drivers/net/ethernet/via/via-velocity.c:2325
> napi_enable called under spin_lock_irqsave from drivers/net/can/grcan.c:1076
> napi_enable called under spin_lock from drivers/net/ethernet/marvell/mvneta.c:4388
> napi_enable called under spin_lock_irqsave from drivers/net/ethernet/amd/pcnet32.c:2104

I checked the 3 cases above other than pcnet32 and they appear to be
false positives to me.

Guenter: would you mind sending me your cocci script? Mostly for
selfish reasons; I'd like to see how you did it so I can learn more
:) Feel free to do so off list if you prefer.

I tried to write my first coccinelle script (which you can find
below); it is probably wrong, but it attempts to detect:
  - interrupt routines that take a spinlock, in
  - drivers that call napi_enable between a lock/unlock

I couldn't figure out how to get regexps to work in my script, so I
made a couple of variants of the script for each of the spin_lock_*
variants and ran them all.

Only one offender was detected: pcnet32.

So, I guess the question to put out there to maintainers / others on
the list is:

  - There seems to be at least 1 driver affected (pcnet32). There
    might be others, but my simple (and likely incorrect) cocci
    script below couldn't detect any with this particular bug shape.
    Worth mentioning: there could be other bug shapes that trigger
    an inversion that I am currently unaware of.

  - As far as I can tell, there are three ways to proceed:
      1. Find and fix all drivers which broke (pcnet32 being the only
         known driver at this point), or
      2. Disable IRQs when taking the lock in napi_hash_del (rough
         sketch below), or
      3. Move the napi hash add/remove out of napi enable/disable.
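
To make option 2 concrete, what I have in mind is roughly the
following (untested sketch; the signature and body are simplified
from memory, the actual removal from the hash is elided, and
napi_hash_add() would need the same treatment so both paths take the
lock consistently):

        /* Untested sketch of option 2: make napi_hash_lock IRQ-safe
         * so that it can nest inside driver locks that are taken
         * with spin_lock_irqsave(), like lp->lock in pcnet32.
         */
        static void napi_hash_del(struct napi_struct *napi)
        {
                unsigned long flags;

                spin_lock_irqsave(&napi_hash_lock, flags);
                /* ... existing removal of the napi from the hash ... */
                spin_unlock_irqrestore(&napi_hash_lock, flags);
        }
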
Happy to proceed however seems most reasonable to the maintainers,
please let me know.

My cocci script follows; as noted above I am too much of a noob and
couldn't figure out how to use regexps to match the different
spin_lock_* variants, so I simply made multiple versions of this
script for each variant:

virtual report

// Rule: a function that calls napi_enable() while holding a lock
// taken with spin_lock_irqsave().
@napi@
identifier func0;
position p0;
@@

func0(...)
{
...
spin_lock_irqsave(...);
...
napi_enable@p0(...);
...
spin_unlock_irqrestore(...);
...
}

// Rule: an interrupt handler that takes a spinlock.
@u@
position p;
identifier func;
typedef irqreturn_t;
@@

irqreturn_t func (...)
{
...
spin_lock@p(...);
...
}

// Report when both rules matched.
@script:python depends on napi && u@
p << u.p;
func << u.func;
disable << napi.p0;
@@

print("* file: %s irq handler %s takes lock on line %s and calls napi_enable under lock %s" % (p[0].file,func,p[0].line,disable[0].line))