Message-ID: <f04406c5-f805-4de3-8a7c-abfdfd91a501@roeck-us.net>
Date: Wed, 27 Nov 2024 13:43:14 -0800
From: Guenter Roeck <linux@...ck-us.net>
To: Joe Damato <jdamato@...tly.com>, netdev@...r.kernel.org,
mkarsten@...terloo.ca, skhawaja@...gle.com, sdf@...ichev.me,
bjorn@...osinc.com, amritha.nambiar@...el.com, sridhar.samudrala@...el.com,
willemdebruijn.kernel@...il.com, edumazet@...gle.com,
Jakub Kicinski <kuba@...nel.org>, "David S. Miller" <davem@...emloft.net>,
Paolo Abeni <pabeni@...hat.com>, Jonathan Corbet <corbet@....net>,
Jiri Pirko <jiri@...nulli.us>,
Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
Lorenzo Bianconi <lorenzo@...nel.org>,
Johannes Berg <johannes.berg@...el.com>,
"open list:DOCUMENTATION" <linux-doc@...r.kernel.org>,
open list <linux-kernel@...r.kernel.org>, pcnet32@...ntier.com
Subject: Re: [net-next v6 5/9] net: napi: Add napi_config
On 11/27/24 10:51, Joe Damato wrote:
> On Wed, Nov 27, 2024 at 09:43:54AM -0800, Guenter Roeck wrote:
>> Hi,
>>
>> On Fri, Oct 11, 2024 at 06:45:00PM +0000, Joe Damato wrote:
>>> Add a persistent NAPI config area for NAPI configuration to the core.
>>> Drivers opt-in to setting the persistent config for a NAPI by passing an
>>> index when calling netif_napi_add_config.
>>>
>>> napi_config is allocated in alloc_netdev_mqs, freed in free_netdev
>>> (after the NAPIs are deleted).
>>>
>>> Drivers which call netif_napi_add_config will have persistent per-NAPI
>>> settings: NAPI IDs, gro_flush_timeout, and defer_hard_irq settings.
>>>
>>> Per-NAPI settings are saved in napi_disable and restored in napi_enable.
>>>
>>> Co-developed-by: Martin Karsten <mkarsten@...terloo.ca>
>>> Signed-off-by: Martin Karsten <mkarsten@...terloo.ca>
>>> Signed-off-by: Joe Damato <jdamato@...tly.com>
>>> Reviewed-by: Jakub Kicinski <kuba@...nel.org>
>>
>> This patch triggers a lock inversion message on pcnet Ethernet adapters.
>
> Thanks for the report. I am not familiar with the pcnet driver, but
> I've now taken some time to read the report below and the driver code.
>
> I could definitely be reading the output incorrectly (if so, please
> let me know), but it seems like the issue can be triggered as
> follows:
>
> CPU 0:
> pcnet32_open
> lock(lp->lock)
> napi_enable
> napi_hash_add
> lock(napi_hash_lock)
> unlock(napi_hash_lock)
> unlock(lp->lock)
>
>
> Meanwhile on CPU 1:
> pcnet32_close
> napi_disable
> napi_hash_del
> lock(napi_hash_lock)
> unlock(napi_hash_lock)
> lock(lp->lock)
> [... other code ...]
> unlock(lp->lock)
> [... other code ...]
> lock(lp->lock)
> [... other code ...]
> unlock(lp->lock)
>
> In other words: while the close path is holding napi_hash_lock (and
> before it acquires lp->lock), the enable path takes lp->lock and
> then napi_hash_lock.
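>
> Schematically, the two orderings look like this (a minimal sketch
> distilled from the traces above, not the actual driver code):
>
> 	/* open path: napi_hash_lock is taken while lp->lock is held */
> 	spin_lock_irqsave(&lp->lock, flags);
> 	napi_enable(&lp->napi);		/* -> napi_hash_add() -> napi_hash_lock */
> 	spin_unlock_irqrestore(&lp->lock, flags);
>
> 	/* close path: napi_hash_lock first, lp->lock only afterwards */
> 	napi_disable(&lp->napi);	/* -> napi_hash_del() -> napi_hash_lock */
> 	spin_lock_irqsave(&lp->lock, flags);
> 	/* ... */
> 	spin_unlock_irqrestore(&lp->lock, flags);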
>
> It seems this was triggered because before the identified commit,
> napi_enable did not call napi_hash_add (and thus did not take the
> napi_hash_lock).
>
> So, I agree there is an inversion, but I can't say for sure whether
> it could cause an actual deadlock. Since napi_hash_del in the close
> path releases napi_hash_lock before lp->lock is acquired, the two
> locks are never held at the same time there, so the inversion
> doesn't seem like it'd lead to a deadlock, but I am not an expert in
> this and could certainly be wrong.
>
> I wonder if a potential fix for this would be to adjust the locking
> order in the driver's close function?
>
> In pcnet32_open the order is:
> lock(lp->lock)
> napi_enable
> netif_start_queue
> mod_timer(watchdog)
> unlock(lp->lock)
>
> Perhaps pcnet32_close should be the same?
>
> I've included an example patch below for pcnet32_close, and I've
> CC'd the pcnet32 maintainer, who was not on CC so far.
>
> Guenter: Is there any chance you might be able to test the proposed
> patch below?
>
I moved the spinlock to after del_timer_sync(), because it is not a good
idea to hold a spinlock while calling that function. That results in:
[ 10.646956] BUG: sleeping function called from invalid context at net/core/dev.c:6775
[ 10.647142] in_atomic(): 1, irqs_disabled(): 1, non_block: 0, pid: 1817, name: ip
[ 10.647237] preempt_count: 1, expected: 0
[ 10.647319] 2 locks held by ip/1817:
[ 10.647383] #0: ffffffff81ded990 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_newlink+0x22a/0x74c
[ 10.647880] #1: ff6000000498ccb0 (&lp->lock){-.-.}-{3:3}, at: pcnet32_close+0x40/0x126
[ 10.648050] irq event stamp: 3720
[ 10.648102] hardirqs last enabled at (3719): [<ffffffff80decaf4>] _raw_spin_unlock_irqrestore+0x54/0x62
[ 10.648204] hardirqs last disabled at (3720): [<ffffffff80dec8a2>] _raw_spin_lock_irqsave+0x5e/0x64
[ 10.648301] softirqs last enabled at (3712): [<ffffffff8001efca>] handle_softirqs+0x3e6/0x4a2
[ 10.648396] softirqs last disabled at (3631): [<ffffffff80ded6cc>] __do_softirq+0x12/0x1a
[ 10.648666] CPU: 0 UID: 0 PID: 1817 Comm: ip Tainted: G N 6.12.0-10313-g7d4050728c83-dirty #1
[ 10.648828] Tainted: [N]=TEST
[ 10.648879] Hardware name: riscv-virtio,qemu (DT)
[ 10.648978] Call Trace:
[ 10.649048] [<ffffffff80006d42>] dump_backtrace+0x1c/0x24
[ 10.649117] [<ffffffff80dc8d94>] show_stack+0x2c/0x38
[ 10.649180] [<ffffffff80de00b0>] dump_stack_lvl+0x74/0xac
[ 10.649246] [<ffffffff80de00fc>] dump_stack+0x14/0x1c
[ 10.649308] [<ffffffff8004da18>] __might_resched+0x23e/0x248
[ 10.649377] [<ffffffff8004da60>] __might_sleep+0x3e/0x62
[ 10.649441] [<ffffffff80b8d370>] napi_disable+0x24/0x10c
[ 10.649506] [<ffffffff809a06fe>] pcnet32_close+0x6c/0x126
...
This is due to the might_sleep() at the beginning of napi_disable(). So the
patch doesn't work as intended; it just replaces one problem with another.
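
For reference, the ordering I tested was roughly the following (a
hand-edited sketch, not the exact diff):

	del_timer_sync(&lp->watchdog_timer);

	spin_lock_irqsave(&lp->lock, flags);

	netif_stop_queue(dev);
	napi_disable(&lp->napi);	/* might_sleep() here triggers the splat */
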
> Don: Would you mind taking a look to see if this change is sensible?
>
> Netdev maintainers: at a higher level, I'm not sure how many other
> drivers might have locking patterns like this that commit
> 86e25f40aa1e ("net: napi: Add napi_config") will break in a similar
> manner.
>
> Do I:
> - comb through drivers trying to identify these, and/or
Coccinelle, checking for napi_enable() calls made under a spinlock, points to:
napi_enable called under spin_lock_irqsave from drivers/net/ethernet/via/via-velocity.c:2325
napi_enable called under spin_lock_irqsave from drivers/net/can/grcan.c:1076
napi_enable called under spin_lock from drivers/net/ethernet/marvell/mvneta.c:4388
napi_enable called under spin_lock_irqsave from drivers/net/ethernet/amd/pcnet32.c:2104
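
Presumably those sites all follow the same pattern; schematically (an
illustration only, not code copied from those drivers):

	spin_lock_irqsave(&priv->lock, flags);	/* mvneta: plain spin_lock() */
	...
	napi_enable(&priv->napi);	/* now takes napi_hash_lock under priv->lock */
	...
	spin_unlock_irqrestore(&priv->lock, flags);
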
Guenter
> - find a way to implement the identified commit with the original
> lock ordering, to avoid breaking any other drivers?
>
> I'd appreciate guidance/insight from the maintainers on how to best
> proceed.
>
> diff --git a/drivers/net/ethernet/amd/pcnet32.c b/drivers/net/ethernet/amd/pcnet32.c
> index 72db9f9e7bee..ff56a308fec9 100644
> --- a/drivers/net/ethernet/amd/pcnet32.c
> +++ b/drivers/net/ethernet/amd/pcnet32.c
> @@ -2623,13 +2623,13 @@ static int pcnet32_close(struct net_device *dev)
>  	struct pcnet32_private *lp = netdev_priv(dev);
>  	unsigned long flags;
>  
> +	spin_lock_irqsave(&lp->lock, flags);
> +
>  	del_timer_sync(&lp->watchdog_timer);
>  
>  	netif_stop_queue(dev);
>  	napi_disable(&lp->napi);
>  
> -	spin_lock_irqsave(&lp->lock, flags);
> -
>  	dev->stats.rx_missed_errors = lp->a->read_csr(ioaddr, 112);
>  
>  	netif_printk(lp, ifdown, KERN_DEBUG, dev,