Message-ID: <79960c2a-26ea-4472-bebb-4657fcca2255@intel.com>
Date: Tue, 3 Feb 2026 15:47:18 -0800
From: Jacob Keller <jacob.e.keller@...el.com>
To: Petr Oros <poros@...hat.com>, Przemek Kitszel
	<przemyslaw.kitszel@...el.com>, Jakub Kicinski <kuba@...nel.org>
CC: <ivecera@...hat.com>, <netdev@...r.kernel.org>,
	<linux-kernel@...r.kernel.org>, Andrew Lunn <andrew+netdev@...n.ch>, "Eric
 Dumazet" <edumazet@...gle.com>, Stanislav Fomichev <sdf@...ichev.me>, "Tony
 Nguyen" <anthony.l.nguyen@...el.com>, <intel-wired-lan@...ts.osuosl.org>,
	Paolo Abeni <pabeni@...hat.com>, "David S. Miller" <davem@...emloft.net>
Subject: Re: [Intel-wired-lan] [PATCH net] iavf: fix deadlock in reset
 handling



On 2/3/2026 3:32 AM, Petr Oros wrote:
> 
> On 2/3/26 11:19, Przemek Kitszel wrote:
>> On 2/3/26 09:44, Petr Oros wrote:
>>>
>>> On 2/3/26 02:00, Jacob Keller wrote:
>>>>
>>>>
>>>> On 2/2/2026 3:58 PM, Jakub Kicinski wrote:
>>>>> On Mon,  2 Feb 2026 09:48:20 +0100 Petr Oros wrote:
>>>>>> +    netdev_unlock(netdev);
>>>>>> +    ret = wait_event_interruptible_timeout(adapter->reset_waitqueue,
>>>>>> +                           !iavf_is_reset_in_progress(adapter),
>>>>>> +                           msecs_to_jiffies(5000));
>>>>>> +    netdev_lock(netdev);
>>>>>
>>>>> Dropping locks taken by the core around the driver callback
>>>>> is obviously unacceptable. SMH.
>>>>
>>>> Right. It seems like the correct fix is to either a) have reset take
>>>> and hold the netdev lock (now that it's distinct from the global RTNL
>>>> lock) or b) refactor reset so that it can defer any of the
>>>> netdev-related stuff somehow.
>>>>
>>> I modeled this after the existing pattern in iavf_close() (ndo_stop), 
>>> which also temporarily releases the netdev instance lock taken by the 
>>> core to wait for an async operation to complete:
>>
>> First of all, thank you for working on that, I was hit by the very same
>> problem (no series yet), and my local fix is currently the same as this
>> one.
>>
>> I don't see an easy fix (w/o substantial driver refactor).
>>
>>>
>>> static int iavf_close(struct net_device *netdev)
>>> {
>>>          netdev_assert_locked(netdev);
>>>          ...
>>>          iavf_down(adapter);
>>>          iavf_change_state(adapter, __IAVF_DOWN_PENDING);
>>>          iavf_free_traffic_irqs(adapter);
>>>
>>>          netdev_unlock(netdev);
>>>
>>>          status = wait_event_timeout(adapter->down_waitqueue,
>>>                                      adapter->state == __IAVF_DOWN,
>>>                                      msecs_to_jiffies(500));
>>>          if (!status)
>>>                  netdev_warn(netdev, "Device resources not yet 
>>> released\n");
>>>          netdev_lock(netdev);
>>>          ...
>>> }
>>>
>>> This was introduced by commit 120f28a6f314fe ("iavf: get rid of the
>>> crit lock"), and ndo_stop is called with the netdev instance lock held
>>> by the core, just like ndo_change_mtu is.
>>
>> technically it was introduced by commit afc664987ab3 ("eth: iavf:
>> extend the netdev_lock usage")
>>
>>> Could you clarify why the unlock-wait-lock pattern is acceptable in
>>> ndo_stop but not here?
>>>

It may simply be that no one spotted it in ndo_stop.

>>
>> perhaps just closing netdev is a special kind of operation
>>
>> The other thing is that the lock was added to allow further NAPI
>> development, and one silly driver should not stop that effort.
>> Sadly, we have not managed to redesign the driver yet. I would like to
>> do so personally, but have too much work accumulated/pending to free up
>> the time.
>>
> I agree, the unlock-wait-lock pattern is fundamentally flawed (I now
> understand why it is unacceptable) and should be avoided.
> 
> What can we do now?
> 
> * Eliminating the wait is not an option: As noted in the description of
> commit c2ed2403f12c, this wait was originally added to fix a race
> condition where adding an interface to bonding failed because the
> device remained in __RESETTING state after the callback returned.
> * Passing the lock into reset is impractical: The reset path is
> triggered from numerous contexts, many of which are not under the
> netdev_lock, making this even more complex than a full refactor.
Hm. I was thinking we could just hold the netdev lock for the reset, 
but... we already take the netdev lock there as of commit ef490bbb2267 
("iavf: Add net_shaper_ops support").

That's actually what causes the deadlock: prior to that change, reset 
didn't hold the netdev lock for the duration. Now we sit waiting for 
reset to finish while holding the very lock that blocks the reset.

But I am not really sure how to fix this: we want the MTU change to 
return only once reset is complete... but reset depends on the very lock 
we're holding, and there's no way to communicate that fact to the reset 
handler...
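
Spelling the cycle out (a compressed sketch, not the actual code; bodies 
and error handling are elided, and the wait is the one this patch wraps 
in unlock/lock):

/* Thread A: the core invokes the ndo with the instance lock held */
static int iavf_change_mtu(struct net_device *netdev, int new_mtu)
{
        struct iavf_adapter *adapter = netdev_priv(netdev);

        netdev_assert_locked(netdev);   /* taken by the core */
        WRITE_ONCE(netdev->mtu, new_mtu);
        iavf_schedule_reset(adapter, IAVF_FLAG_RESET_NEEDED);
        /* ... then sleeps until the reset task signals completion,
         * still holding the instance lock */
        return 0;
}

/* Thread B: the reset worker */
static void iavf_reset_task(struct work_struct *work)
{
        struct iavf_adapter *adapter =
                container_of(work, struct iavf_adapter, reset_task);

        netdev_lock(adapter->netdev);   /* blocks forever: thread A holds
                                         * the lock and won't drop it until
                                         * the reset it queued completes */
        /* ... */
}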


> 
> If dropping the lock is a no-go, the only viable path forward is to
> split the reset_task so that the waiting portion is decoupled from the
> netdev_lock critical section.
> 
We used to do this back before the netdev shaper ops: we didn't acquire 
either the netdev lock or RTNL during reset.

I thought we had some code in the past that would handle the netdev 
stuff outside of reset... but I don't really know, and git blame is not 
making it easy to find this information.

Perhaps we don't actually need to hold the netdev lock over the reset 
task... except Przemek's refactor to remove the critical lock now makes 
us fully dependent on the netdev lock for reset :(

> The fact remains that MTU configuration and ring parameter changes are
> currently broken in iavf. Changing the MTU on a Virtual Function is a
> fundamental configuration, not an obscure edge case that can remain
> non-functional.
> 

Agreed. This needs a resolution. It is just very tricky to figure out 
what the solution should be.

We need to hold the netdev lock during reset, and we need our handlers 
to wait for reset to complete in order to be certain their task is 
done... but reset runs in a separate thread, so we can't really 
communicate to it that we're holding the lock, and attempts to do so 
would be a huge problem.

We don't want to go back to the critical lock and all of its horrible 
problems either. The commit that removed it is here: 120f28a6f314 
("iavf: get rid of the crit lock")

> I would appreciate any further guidance on how you would prefer...
> 

I wish I had some better ideas...

Bad ideas I've thought about so far:

1) this patch, with its drop-lock-and-wait, which we discussed as 
problematic above. It creates a lot of issues, since the operations are 
no longer atomic and we could get stuck behind some other operation if 
another thread starts a core netdev task in the window. No good.
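
Annotating the hunk makes the non-atomicity concrete (code taken from 
the hunk above, comment is mine):

        netdev_unlock(netdev);
        /* <-- window: the core can take the instance lock here and run
         *     another ndo callback against half-reset device state */
        ret = wait_event_interruptible_timeout(adapter->reset_waitqueue,
                                               !iavf_is_reset_in_progress(adapter),
                                               msecs_to_jiffies(5000));
        netdev_lock(netdev);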

2) not holding netdev_lock in reset, which is now a no-go since we 
removed the crit_lock, and apparently we held netdev_lock prior to that 
anyway...

3) we could maybe do some sort of ref-counting dance where threads that 
queue a reset take a reference, and the reset task would know that if 
that reference is non-zero, another driver thread is holding netdev_lock 
and it is safe to go into reset without locking... but this feels 
extremely ugly and error-prone to me...
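
For the record, I imagine (3) as something like this (hypothetical: the 
netdev_lock_holders counter and iavf_queue_reset_locked() don't exist 
today, and the check in the worker is itself racy, which is part of why 
it feels so fragile):

/* callers that hold netdev_lock and then wait for the reset */
static void iavf_queue_reset_locked(struct iavf_adapter *adapter)
{
        atomic_inc(&adapter->netdev_lock_holders);      /* new field */
        iavf_schedule_reset(adapter, IAVF_FLAG_RESET_NEEDED);
        /* ... wait for completion, then atomic_dec() ... */
}

static void iavf_reset_task(struct work_struct *work)
{
        struct iavf_adapter *adapter =
                container_of(work, struct iavf_adapter, reset_task);
        /* racy: a second locked thread can appear right after this read */
        bool need_lock = !atomic_read(&adapter->netdev_lock_holders);

        if (need_lock)
                netdev_lock(adapter->netdev);
        /* do the reset, borrowing the queueing thread's lock otherwise */
        if (need_lock)
                netdev_unlock(adapter->netdev);
}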

4) convert reset handling to a separate function that depends on the 
netdev_lock, and call that directly from within the threads that 
currently "wait for reset" while holding the netdev lock (roughly 
sketched below). We would basically move the entire call chain into the 
thread that already holds the lock, and call it from the context of the 
function like the MTU change, etc. This also feels like a huge issue, 
and could still leave us needing to block the reset thread from running 
if a reset gets queued at the end of the netdev_lock thread...
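
Roughly (hypothetical again: iavf_do_reset() is a made-up name for the 
factored-out body of today's reset task):

/* the guts of the reset; callers must hold the instance lock */
static void iavf_do_reset(struct iavf_adapter *adapter)
{
        netdev_assert_locked(adapter->netdev);
        /* ... body of today's iavf_reset_task ... */
}

static int iavf_change_mtu(struct net_device *netdev, int new_mtu)
{
        struct iavf_adapter *adapter = netdev_priv(netdev);

        WRITE_ONCE(netdev->mtu, new_mtu);
        iavf_do_reset(adapter);         /* synchronous, lock already held */
        return 0;
}

/* the worker becomes a thin locking wrapper for async reset sources */
static void iavf_reset_task(struct work_struct *work)
{
        struct iavf_adapter *adapter =
                container_of(work, struct iavf_adapter, reset_task);

        netdev_lock(adapter->netdev);
        iavf_do_reset(adapter);
        netdev_unlock(adapter->netdev);
}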

I don't really like any of these solutions; even if (3) and (4) aren't 
fully ruled out as completely broken, they probably have all kinds of 
issues...
