Message-ID: <877dk162mo.ffs@nanos.tec.linutronix.de>
Date: Fri, 14 May 2021 12:17:19 +0200
From: Thomas Gleixner <tglx@...utronix.de>
To: linux-usb@...r.kernel.org, netdev@...r.kernel.org
Cc: Michal Svec <msvec@...e.com>,
"David S. Miller" <davem@...emloft.net>,
Jakub Kicinski <kuba@...nel.org>,
Hayes Wang <hayeswang@...ltek.com>,
Thierry Reding <treding@...dia.com>,
Lee Jones <lee.jones@...aro.org>,
Borislav Petkov <bp@...en8.de>
Subject: [PATCH RFC] r8152: Ensure that napi_schedule() is handled
From: Thomas Gleixner <tglx@...utronix.de>
Date: Fri, 14 May 2021 11:46:08 +0200

The driver invokes napi_schedule() in several places from task
context. napi_schedule() raises the NET_RX softirq bit and relies on the
calling context to ensure that the softirq is handled. That's usually on
return from interrupt or on the outermost local_bh_enable().

But neither is guaranteed here, so the soft interrupt handling is delayed
until the next interrupt or local_bh_enable(). If the task from whose
context this is invoked is the last runnable task on a CPU and the CPU
goes idle before an interrupt arrives or a local_bh_disable/enable() pair
handles the pending soft interrupt, the NOHZ idle code emits the
following warning:

  NOHZ tick-stop error: Non-RCU local softirq work is pending, handler #08!!!

Prevent this by wrapping the napi_schedule() invocation from task context
into a local_bh_disable/enable() pair.
Reported-by: Michal Svec <msvec@...e.com>
Signed-off-by: Thomas Gleixner <tglx@...utronix.de>
---
Note: That's not the first incident of this. Shouldn't napi_schedule()
have a debug check (under lockdep) to catch this?
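
For illustration only, not part of this patch: assuming
lockdep_assert_in_softirq() from <linux/lockdep.h> (which complains unless
softirq processing is active or softirqs are disabled on this CPU, i.e.
unless a raised softirq is guaranteed to be handled on the way out), such
a check in include/linux/netdevice.h might look like:

```c
/*
 * Hypothetical sketch of napi_schedule() with a lockdep check. Fires
 * when called from plain task context, where the NET_RX bit raised by
 * __napi_schedule() would not be processed promptly.
 */
static inline void napi_schedule(struct napi_struct *n)
{
	if (napi_schedule_prep(n)) {
		lockdep_assert_in_softirq();
		__napi_schedule(n);
	}
}
```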
---
drivers/net/usb/r8152.c | 19 +++++++++++++++----
1 file changed, 15 insertions(+), 4 deletions(-)
--- a/drivers/net/usb/r8152.c
+++ b/drivers/net/usb/r8152.c
@@ -1543,6 +1543,17 @@ void write_mii_word(struct net_device *n
 	r8152_mdio_write(tp, reg, val);
 }
 
+/*
+ * Wrapper around napi_schedule() to ensure that the raised network softirq
+ * is actually handled.
+ */
+static void r8152_napi_schedule(struct napi_struct *napi)
+{
+	local_bh_disable();
+	napi_schedule(napi);
+	local_bh_enable();
+}
+
 static int
 r8152_submit_rx(struct r8152 *tp, struct rx_agg *agg, gfp_t mem_flags);
 
@@ -1753,7 +1764,7 @@ static void read_bulk_callback(struct ur
 		spin_lock_irqsave(&tp->rx_lock, flags);
 		list_add_tail(&agg->list, &tp->rx_done);
 		spin_unlock_irqrestore(&tp->rx_lock, flags);
-		napi_schedule(&tp->napi);
+		r8152_napi_schedule(&tp->napi);
 		return;
 	case -ESHUTDOWN:
 		rtl_set_unplug(tp);
@@ -2640,7 +2651,7 @@ int r8152_submit_rx(struct r8152 *tp, st
 		netif_err(tp, rx_err, tp->netdev,
 			  "Couldn't submit rx[%p], ret = %d\n", agg, ret);
 
-		napi_schedule(&tp->napi);
+		r8152_napi_schedule(&tp->napi);
 	}
 
 	return ret;
@@ -8202,7 +8213,7 @@ static int rtl8152_post_reset(struct usb
 	usb_submit_urb(tp->intr_urb, GFP_KERNEL);
 
 	if (!list_empty(&tp->rx_done))
-		napi_schedule(&tp->napi);
+		r8152_napi_schedule(&tp->napi);
 
 	return 0;
 }
@@ -8256,7 +8267,7 @@ static int rtl8152_runtime_resume(struct
 		smp_mb__after_atomic();
 
 		if (!list_empty(&tp->rx_done))
-			napi_schedule(&tp->napi);
+			r8152_napi_schedule(&tp->napi);
 
 		usb_submit_urb(tp->intr_urb, GFP_NOIO);
 	} else {