Message-Id: <20201018192417.4055228-54-sashal@kernel.org>
Date: Sun, 18 Oct 2020 15:24:15 -0400
From: Sasha Levin <sashal@...nel.org>
To: linux-kernel@...r.kernel.org, stable@...r.kernel.org
Cc: Eli Billauer <eli.billauer@...il.com>,
Oliver Neukum <oneukum@...e.com>,
Alan Stern <stern@...land.harvard.edu>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
Sasha Levin <sashal@...nel.org>, linux-usb@...r.kernel.org
Subject: [PATCH AUTOSEL 4.19 54/56] usb: core: Solve race condition in anchor cleanup functions

From: Eli Billauer <eli.billauer@...il.com>

[ Upstream commit fbc299437c06648afcc7891e6e2e6638dd48d4df ]

usb_kill_anchored_urbs() is commonly used to cancel all URBs on an anchor
just before releasing the resources that the URBs rely on. Callers therefore
depend on the guarantee that no completer callback of any URB on the anchor
will run after this function returns.

However, if this function runs in parallel with __usb_hcd_giveback_urb()
processing a URB on the anchor, the latter may call the completer callback
after usb_kill_anchored_urbs() has returned. This can lead to a kernel panic
due to a use-after-free in interrupt context.

The race condition is that __usb_hcd_giveback_urb() first unanchors the URB
and only then calls the completer callback. Such a URB is hence invisible to
usb_kill_anchored_urbs(), which may return before the completer has been
called, because the anchor's urb_list is already empty.

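For illustration, here is a simplified paraphrase of the relevant part of
__usb_hcd_giveback_urb() in drivers/usb/core/hcd.c; it is a sketch of the
ordering only, with most details omitted, not the verbatim kernel code:

/*
 * Simplified sketch (paraphrased, details omitted) of the ordering inside
 * __usb_hcd_giveback_urb():
 */
static void __usb_hcd_giveback_urb(struct urb *urb)
{
        struct usb_anchor *anchor = urb->anchor;

        usb_anchor_suspend_wakeups(anchor);
        usb_unanchor_urb(urb);  /* URB vanishes from anchor->urb_list here */

        /*
         * From this point until the callback below returns, the URB is
         * invisible to anyone walking anchor->urb_list, yet its completer
         * has not run yet.
         */
        urb->complete(urb);

        usb_anchor_resume_wakeups(anchor);
        wake_up(&usb_kill_urb_queue);
}
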
Even worse, if the racing completer callback resubmits the URB, it may
remain in the system long after usb_kill_anchored_urbs() returns.

Hence list_empty(&anchor->urb_list), which is used in the existing
while-loop, doesn't reliably ensure that all of the anchor's URBs are gone.

A similar problem exists with usb_poison_anchored_urbs() and
usb_scuttle_anchored_urbs().

This patch adds an outer do-while loop, which ensures that all URBs are
indeed handled before each of these three functions returns. The change has
no effect at all unless the race condition occurs, in which case the loop
busy-waits until the racing completer callback has finished. Since this is
a rare condition, the CPU time wasted by the spinning is negligible.

The additional do-while loop relies on usb_anchor_check_wakeup(), which
returns true iff the anchor's list is empty and no __usb_hcd_giveback_urb()
anywhere in the system is currently in the middle of the
unanchor-before-complete phase. The @suspend_wakeups member of
struct usb_anchor is used for this purpose; it was introduced in commit
6ec4147e7bdb ("usb-anchor: Delay usb_wait_anchor_empty_timeout wake up
till completion is done") to solve another problem caused by the same race
condition.

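For reference, usb_anchor_check_wakeup() is a small inline helper in
include/linux/usb.h; at the time of this patch it reads roughly as follows
(reproduced from memory, so treat it as a sketch rather than an
authoritative copy):

static inline int usb_anchor_check_wakeup(struct usb_anchor *anchor)
{
        /* Empty list, and no giveback currently between unanchor and complete */
        return atomic_read(&anchor->suspend_wakeups) == 0 &&
                list_empty(&anchor->urb_list);
}
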
The surely_empty variable is necessary because usb_anchor_check_wakeup()
must be called with the lock held to prevent races. However, the spinlock
must be released and reacquired if the outer loop spins with an empty URB
list while waiting for the unanchor-before-complete phase to finish: the
completer callback may very well attempt to take the very same lock.

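To see why, consider a completion handler that resubmits its URB:
re-anchoring takes anchor->lock internally, so it would deadlock against a
cleanup function spinning with that lock held. The handler below is a
minimal, purely hypothetical sketch (my_complete, my_dev and keep_streaming
are made-up names):

/* Hypothetical completion handler; names are illustrative only. */
static void my_complete(struct urb *urb)
{
        struct my_dev *dev = urb->context;

        if (urb->status == 0 && keep_streaming(dev)) {
                /* usb_anchor_urb() acquires anchor->lock internally */
                usb_anchor_urb(urb, &dev->anchor);
                if (usb_submit_urb(urb, GFP_ATOMIC))
                        usb_unanchor_urb(urb);
        }
}
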
To summarize, using usb_anchor_check_wakeup() means that the patched
functions can return only when the anchor's list is empty and no invisible
URB is being processed. Since the inner while loop finishes on the
empty-list condition, the new do-while loop terminates as well, except
while the said race condition is still in progress.

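Concretely, this is the driver-side pattern that the strengthened guarantee
protects; my_disconnect, my_dev, dev->anchor and dev->bulk_buf are
hypothetical names used only to illustrate the point:

static void my_disconnect(struct usb_interface *intf)
{
        struct my_dev *dev = usb_get_intfdata(intf);

        /*
         * After this returns, no completer callback of any URB that was on
         * the anchor can still run, not even one that was already past the
         * unanchor step when we were called.
         */
        usb_kill_anchored_urbs(&dev->anchor);

        /*
         * Hence freeing the resources the callbacks use is now safe.
         * Without the fix, a racing __usb_hcd_giveback_urb() could still
         * invoke the completer and touch this freed memory in interrupt
         * context.
         */
        kfree(dev->bulk_buf);
        kfree(dev);
}
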
Signed-off-by: Eli Billauer <eli.billauer@...il.com>
Acked-by: Oliver Neukum <oneukum@...e.com>
Acked-by: Alan Stern <stern@...land.harvard.edu>
Link: https://lore.kernel.org/r/20200731054650.30644-1-eli.billauer@gmail.com
Signed-off-by: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
Signed-off-by: Sasha Levin <sashal@...nel.org>
---
drivers/usb/core/urb.c | 89 +++++++++++++++++++++++++-----------------
1 file changed, 54 insertions(+), 35 deletions(-)
diff --git a/drivers/usb/core/urb.c b/drivers/usb/core/urb.c
index 5e844097a9e30..3cd7732c086e0 100644
--- a/drivers/usb/core/urb.c
+++ b/drivers/usb/core/urb.c
@@ -773,11 +773,12 @@ void usb_block_urb(struct urb *urb)
EXPORT_SYMBOL_GPL(usb_block_urb);
/**
- * usb_kill_anchored_urbs - cancel transfer requests en masse
+ * usb_kill_anchored_urbs - kill all URBs associated with an anchor
* @anchor: anchor the requests are bound to
*
- * this allows all outstanding URBs to be killed starting
- * from the back of the queue
+ * This kills all outstanding URBs starting from the back of the queue,
+ * with guarantee that no completer callbacks will take place from the
+ * anchor after this function returns.
*
* This routine should not be called by a driver after its disconnect
* method has returned.
@@ -785,20 +786,26 @@ EXPORT_SYMBOL_GPL(usb_block_urb);
void usb_kill_anchored_urbs(struct usb_anchor *anchor)
{
struct urb *victim;
+ int surely_empty;
- spin_lock_irq(&anchor->lock);
- while (!list_empty(&anchor->urb_list)) {
- victim = list_entry(anchor->urb_list.prev, struct urb,
- anchor_list);
- /* we must make sure the URB isn't freed before we kill it*/
- usb_get_urb(victim);
- spin_unlock_irq(&anchor->lock);
- /* this will unanchor the URB */
- usb_kill_urb(victim);
- usb_put_urb(victim);
+ do {
spin_lock_irq(&anchor->lock);
- }
- spin_unlock_irq(&anchor->lock);
+ while (!list_empty(&anchor->urb_list)) {
+ victim = list_entry(anchor->urb_list.prev,
+ struct urb, anchor_list);
+ /* make sure the URB isn't freed before we kill it */
+ usb_get_urb(victim);
+ spin_unlock_irq(&anchor->lock);
+ /* this will unanchor the URB */
+ usb_kill_urb(victim);
+ usb_put_urb(victim);
+ spin_lock_irq(&anchor->lock);
+ }
+ surely_empty = usb_anchor_check_wakeup(anchor);
+
+ spin_unlock_irq(&anchor->lock);
+ cpu_relax();
+ } while (!surely_empty);
}
EXPORT_SYMBOL_GPL(usb_kill_anchored_urbs);
@@ -817,21 +824,27 @@ EXPORT_SYMBOL_GPL(usb_kill_anchored_urbs);
void usb_poison_anchored_urbs(struct usb_anchor *anchor)
{
struct urb *victim;
+ int surely_empty;
- spin_lock_irq(&anchor->lock);
- anchor->poisoned = 1;
- while (!list_empty(&anchor->urb_list)) {
- victim = list_entry(anchor->urb_list.prev, struct urb,
- anchor_list);
- /* we must make sure the URB isn't freed before we kill it*/
- usb_get_urb(victim);
- spin_unlock_irq(&anchor->lock);
- /* this will unanchor the URB */
- usb_poison_urb(victim);
- usb_put_urb(victim);
+ do {
spin_lock_irq(&anchor->lock);
- }
- spin_unlock_irq(&anchor->lock);
+ anchor->poisoned = 1;
+ while (!list_empty(&anchor->urb_list)) {
+ victim = list_entry(anchor->urb_list.prev,
+ struct urb, anchor_list);
+ /* make sure the URB isn't freed before we kill it */
+ usb_get_urb(victim);
+ spin_unlock_irq(&anchor->lock);
+ /* this will unanchor the URB */
+ usb_poison_urb(victim);
+ usb_put_urb(victim);
+ spin_lock_irq(&anchor->lock);
+ }
+ surely_empty = usb_anchor_check_wakeup(anchor);
+
+ spin_unlock_irq(&anchor->lock);
+ cpu_relax();
+ } while (!surely_empty);
}
EXPORT_SYMBOL_GPL(usb_poison_anchored_urbs);
@@ -971,14 +984,20 @@ void usb_scuttle_anchored_urbs(struct usb_anchor *anchor)
{
struct urb *victim;
unsigned long flags;
+ int surely_empty;
+
+ do {
+ spin_lock_irqsave(&anchor->lock, flags);
+ while (!list_empty(&anchor->urb_list)) {
+ victim = list_entry(anchor->urb_list.prev,
+ struct urb, anchor_list);
+ __usb_unanchor_urb(victim, anchor);
+ }
+ surely_empty = usb_anchor_check_wakeup(anchor);
- spin_lock_irqsave(&anchor->lock, flags);
- while (!list_empty(&anchor->urb_list)) {
- victim = list_entry(anchor->urb_list.prev, struct urb,
- anchor_list);
- __usb_unanchor_urb(victim, anchor);
- }
- spin_unlock_irqrestore(&anchor->lock, flags);
+ spin_unlock_irqrestore(&anchor->lock, flags);
+ cpu_relax();
+ } while (!surely_empty);
}
EXPORT_SYMBOL_GPL(usb_scuttle_anchored_urbs);
--
2.25.1