Message-ID: <CAOPYjvaOBke7QVqAwbxOGyuVVb2hQGi3t-yiN7P=4sK-Mt-+Dg@mail.gmail.com>
Date: Tue, 25 Feb 2025 21:02:00 +0800
From: Gui-Dong Han <hanguidong02@...il.com>
To: Larry.Finger@...inger.net, phil@...lpotter.co.uk, paskripkin@...il.com,
Greg KH <gregkh@...uxfoundation.org>
Cc: linux-staging@...ts.linux.dev, LKML <linux-kernel@...r.kernel.org>,
baijiaju1990@...il.com, stable@...r.kernel.org
Subject: [BUG] r8188eu: Potential deadlocks in rtw_wx_set_wap/essid functions

Hello maintainers,

I would like to report a potential lock-ordering issue in the r8188eu
driver that may lead to deadlocks under certain conditions.

The functions rtw_wx_set_wap() and rtw_wx_set_essid() acquire locks in
an order that contradicts the locking hierarchy established elsewhere
in the driver:

1. They first take &pmlmepriv->scanned_queue.lock
2. They then call rtw_set_802_11_infrastructure_mode(), which takes
   &pmlmepriv->lock

This is the inverse of the common pattern seen in functions such as
rtw_joinbss_event_prehandle(), rtw_createbss_cmd_callback(), and
others, which typically:

1. Take &pmlmepriv->lock first
2. Then take &pmlmepriv->scanned_queue.lock

This lock inversion creates a potential AB-BA deadlock when these code
paths execute concurrently: each path can end up waiting for the lock
the other already holds.
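
Below is a minimal sketch of the two lock orderings, paraphrased from
my reading of the code (simplified; error handling and unrelated
statements omitted):

    /* rtw_wx_set_wap() / rtw_wx_set_essid() */
    spin_lock_bh(&pmlmepriv->scanned_queue.lock);   /* lock A */
    ...
    rtw_set_802_11_infrastructure_mode(padapter, ...);
        spin_lock_bh(&pmlmepriv->lock);             /* then lock B */

    /* rtw_joinbss_event_prehandle() and similar callers */
    spin_lock_bh(&pmlmepriv->lock);                 /* lock B */
    ...
    spin_lock_bh(&pmlmepriv->scanned_queue.lock);   /* then lock A */

If one thread holds lock A and waits for lock B while another thread
holds lock B and waits for lock A, neither can make progress.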

Moreover, the call chain rtw_wx_set_*() ->
rtw_set_802_11_infrastructure_mode() -> rtw_free_assoc_resources()
could re-acquire &pmlmepriv->scanned_queue.lock while it is already
held, potentially causing a self-deadlock even without concurrency.
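
Sketched as a call chain (again paraphrased; whether
rtw_free_assoc_resources() re-takes the lock may depend on its
arguments in a given kernel version):

    rtw_wx_set_wap()   /* or rtw_wx_set_essid() */
        spin_lock_bh(&pmlmepriv->scanned_queue.lock);
        rtw_set_802_11_infrastructure_mode()
            rtw_free_assoc_resources()
                /* re-takes &pmlmepriv->scanned_queue.lock; kernel
                   spinlocks are not recursive, so this would
                   self-deadlock */
                spin_lock_bh(&pmlmepriv->scanned_queue.lock);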

This issue exists in the longterm kernels that still contain the
r8188eu driver:

5.4.y (up to 5.4.290)
5.10.y (up to 5.10.234)
5.15.y (up to 5.15.178)
6.1.y (up to 6.1.129)

The r8188eu driver has been removed from the upstream kernel, but the
maintained longterm branches above (5.4.y through 6.1.y) still include
it and are affected.

This issue was identified through static analysis, and I have verified
the locking patterns by code review. However, I am not sufficiently
familiar with the driver's internals to propose a safe fix.

Thank you for your attention to this matter.

Best regards,
Gui-Dong Han