Message-ID: <CAP=Rh=OnQJ2O93GaJQdDXF9W6ft7sEA2QOY7ais8NAJaXH2V5Q@mail.gmail.com>
Date: Mon, 26 May 2025 17:24:04 +0800
From: John <john.cs.hey@...il.com>
To: "David S. Miller" <davem@...emloft.net>, Eric Dumazet <edumazet@...gle.com>,
Jakub Kicinski <kuba@...nel.org>, Paolo Abeni <pabeni@...hat.com>
Cc: Simon Horman <horms@...nel.org>, netdev@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: [Bug] "possible deadlock in rtnl_setlink" in Linux kernel v6.14
Dear Linux Kernel Maintainers,
I hope this message finds you well.
I am writing to report a potential deadlock I encountered while
testing Linux kernel v6.14.
Git Commit: 38fec10eb60d687e30c8c6b5420d86e8149f7557 (tag: v6.14)
Bug Location: rtnl_setlink+0x32e/0x6f0 net/core/rtnetlink.c:3420
Bug report: https://hastebin.com/share/nohususeku.bash
Complete log: https://hastebin.com/share/fijazefoci.perl
Entire kernel config: https://hastebin.com/share/qonequpodu.ini
Root Cause Analysis:
The kernel lockdep subsystem issued a warning about a potential
circular locking dependency between rtnl_mutex and the
e1000_reset_task workqueue lock.
The warning was triggered during an rtnl_setlink() call, which holds
rtnl_mutex and later invokes flush_work() on the e1000 adapter's
reset_task.
The problematic dependency arises because rtnl_mutex is held while
calling __flush_work(&adapter->reset_task), but the reset_task
workqueue handler (e1000_reset_task) can also acquire rtnl_mutex
internally.
This creates a cycle in lock ordering: task A holds rtnl_mutex and
waits for the work item to complete, while task B (the worker thread
running the work item) must acquire rtnl_mutex before it can complete.
At present, I have not yet obtained a minimal reproducer for this
issue. However, I am actively working on reproducing it and will
promptly share any additional findings or a working reproducer as soon
as they are available.
Thank you very much for your time and attention to this matter. I
truly appreciate the efforts of the Linux kernel community.
Best regards,
John