Message-ID: <YitkzkjU5zng7jAM@linutronix.de>
Date: Fri, 11 Mar 2022 16:03:42 +0100
From: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
To: netdev@...r.kernel.org
Cc: "David S. Miller" <davem@...emloft.net>,
Jakub Kicinski <kuba@...nel.org>,
Eric Dumazet <edumazet@...gle.com>,
Thomas Gleixner <tglx@...utronix.de>,
Peter Zijlstra <peterz@...radead.org>
Subject: [PATCH net-next] net: Add lockdep asserts to ____napi_schedule().
____napi_schedule() needs to be invoked with interrupts disabled due to
__raise_softirq_irqoff() (in order not to corrupt the per-CPU list).
____napi_schedule() also needs to be invoked from interrupt context so
that the raised softirq is processed once the interrupt context is left.

Add lockdep asserts for both conditions.
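
For illustration (not part of the patch), a typical path that satisfies
both asserts is a driver's hard interrupt handler kicking NAPI;
foo_msix_handler() and struct foo_queue are made up for the example:

	/* Runs in hardirq context: interrupts are disabled and the raised
	 * NET_RX_SOFTIRQ is guaranteed to be processed on irq_exit().
	 */
	static irqreturn_t foo_msix_handler(int irq, void *data)
	{
		struct foo_queue *q = data;

		napi_schedule(&q->napi);	/* ends up in ____napi_schedule() */
		return IRQ_HANDLED;
	}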
Since this is the second place that needs the irq/softirq check, provide
a generic lockdep_assert_softirq_will_run() which is used by both
callers.
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
---
This is my todo from
https://lkml.kernel.org/r/20220203170901.52ccfd09@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com
- It was suggested to add this to __napi_schedule() and
  __napi_schedule_irqoff(), but both end up here in ____napi_schedule().
- It was suggested to do lockdep_assert_softirq_will_run(). Done, plus
  the other caller has been moved over to it.
While adding this, I stumbled over lockdep_assert_in_softirq(). It is
really special-casing things and builds upon assumptions; it took me a
while to figure out what is going on. I would suggest removing it and,
as a replacement, adding lockdep annotations (as in spin_acquire())
around the `napi_alloc_cache' access, which is the thing the annotation
cares about. Then lockdep_assert_in_softirq() can be replaced with a
might_lock(), so in the end we know why we do what we do. Lockdep will
yell if the lock has been observed in-hardirq and in-softirq without
interrupts disabled.
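
Roughly what I have in mind, just as a sketch (this would live in
net/core/skbuff.c next to the per-CPU napi_alloc_cache; the helper name
is made up):

	static struct lockdep_map napi_alloc_cache_dep_map =
		STATIC_LOCKDEP_MAP_INIT("napi_alloc_cache", &napi_alloc_cache_dep_map);

	static struct napi_alloc_cache *napi_alloc_cache_get(void)
	{
		/* Record the "lock" acquisition so lockdep tracks from which
		 * contexts (hardirq/softirq/irqs-off) the cache is used.
		 */
		lock_map_acquire(&napi_alloc_cache_dep_map);
		return this_cpu_ptr(&napi_alloc_cache);
	}

	/* ... with a matching lock_map_release() once the access is done.
	 * The lockdep_assert_in_softirq() user would then do the same
	 * acquire/release pair (the might_lock() equivalent for a plain
	 * lockdep_map) instead of the open-coded context check.
	 */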
Sounds good?
---
include/linux/lockdep.h | 7 +++++++
net/core/dev.c | 5 ++++-
2 files changed, 11 insertions(+), 1 deletion(-)
diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h
index 467b94257105e..0cc65d2167015 100644
--- a/include/linux/lockdep.h
+++ b/include/linux/lockdep.h
@@ -329,6 +329,12 @@ extern void lock_unpin_lock(struct lockdep_map *lock, struct pin_cookie);
 #define lockdep_assert_none_held_once()	\
 	lockdep_assert_once(!current->lockdep_depth)
 
+/*
+ * Ensure that softirq is handled within the callchain and not delayed and
+ * handled by chance.
+ */
+#define lockdep_assert_softirq_will_run()	\
+	lockdep_assert_once(hardirq_count() | softirq_count())
 
 #define lockdep_recursing(tsk)	((tsk)->lockdep_recursion)
 
@@ -414,6 +420,7 @@ extern int lockdep_is_held(const void *);
 #define lockdep_assert_held_read(l)	do { (void)(l); } while (0)
 #define lockdep_assert_held_once(l)	do { (void)(l); } while (0)
 #define lockdep_assert_none_held_once()	do { } while (0)
+#define lockdep_assert_softirq_will_run()	do { } while (0)
 
 #define lockdep_recursing(tsk)	(0)
 
diff --git a/net/core/dev.c b/net/core/dev.c
index ba69ddf85af6b..dbda85879f6c1 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -4267,6 +4267,9 @@ static inline void ____napi_schedule(struct softnet_data *sd,
 {
 	struct task_struct *thread;
 
+	lockdep_assert_softirq_will_run();
+	lockdep_assert_irqs_disabled();
+
 	if (test_bit(NAPI_STATE_THREADED, &napi->state)) {
 		/* Paired with smp_mb__before_atomic() in
 		 * napi_enable()/dev_set_threaded().
@@ -4874,7 +4877,7 @@ int __netif_rx(struct sk_buff *skb)
 {
 	int ret;
 
-	lockdep_assert_once(hardirq_count() | softirq_count());
+	lockdep_assert_softirq_will_run();
 
 	trace_netif_rx_entry(skb);
 	ret = netif_rx_internal(skb);
--
2.35.1