Message-Id: <20240427102808.29356-1-qiang.zhang1211@gmail.com>
Date: Sat, 27 Apr 2024 18:28:08 +0800
From: Zqiang <qiang.zhang1211@...il.com>
To: paulmck@...nel.org,
tglx@...utronix.de,
frederic@...nel.org,
neeraj.upadhyay@...nel.org,
joel@...lfernandes.org
Cc: qiang.zhang1211@...il.com,
rcu@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: [PATCH] rcu: Fix suspicious RCU usage in __do_softirq()
Currently, the condition "__this_cpu_read(ksoftirqd) == current" is
checked to ensure that rcu_softirq_qs() is invoked only in ksoftirqd
task context for non-RT kernels. However, this condition can be
broken, as in the following scenario:
ksoftirqd/0
->finish_task_switch
->put_task_struct_rcu_user
->call_rcu(&task->rcu, delayed_put_task_struct)
->__kasan_record_aux_stack
->pfn_valid
->rcu_read_lock_sched()
<interrupt>
__irq_exit_rcu
->__do_softirq
-> if (!IS_ENABLED(CONFIG_PREEMPT_RT) &&
__this_cpu_read(ksoftirqd) == current)
->rcu_softirq_qs
->RCU_LOCKDEP_WARN(lock_is_held(&rcu_sched_lock_map))
The RCU quiescent state is reported while inside an RCU read-side
critical section, so the lockdep warning is triggered. This commit
therefore removes the "__this_cpu_read(ksoftirqd) == current" condition
check, introduces a new handle_softirqs(bool kirqd) function to replace
__do_softirq() in run_ksoftirqd(), and passes kirqd as true there, so
that rcu_softirq_qs() is invoked only in ksoftirqd task context for
non-RT kernels.
Reported-by: syzbot+dce04ed6d1438ad69656@...kaller.appspotmail.com
Link: https://lore.kernel.org/lkml/8f281a10-b85a-4586-9586-5bbc12dc784f@paulmck-laptop/T/#mea8aba4abfcb97bbf499d169ce7f30c4cff1b0e3
Suggested-by: Thomas Gleixner <tglx@...utronix.de>
Signed-off-by: Zqiang <qiang.zhang1211@...il.com>
---
kernel/softirq.c | 12 ++++++++----
1 file changed, 8 insertions(+), 4 deletions(-)
diff --git a/kernel/softirq.c b/kernel/softirq.c
index b315b21fb28c..e991d735be0d 100644
--- a/kernel/softirq.c
+++ b/kernel/softirq.c
@@ -508,7 +508,7 @@ static inline bool lockdep_softirq_start(void) { return false; }
static inline void lockdep_softirq_end(bool in_hardirq) { }
#endif
-asmlinkage __visible void __softirq_entry __do_softirq(void)
+static void handle_softirqs(bool kirqd)
{
unsigned long end = jiffies + MAX_SOFTIRQ_TIME;
unsigned long old_flags = current->flags;
@@ -563,8 +563,7 @@ asmlinkage __visible void __softirq_entry __do_softirq(void)
pending >>= softirq_bit;
}
- if (!IS_ENABLED(CONFIG_PREEMPT_RT) &&
- __this_cpu_read(ksoftirqd) == current)
+ if (!IS_ENABLED(CONFIG_PREEMPT_RT) && kirqd)
rcu_softirq_qs();
local_irq_disable();
@@ -584,6 +583,11 @@ asmlinkage __visible void __softirq_entry __do_softirq(void)
current_restore_flags(old_flags, PF_MEMALLOC);
}
+asmlinkage __visible void __softirq_entry __do_softirq(void)
+{
+ handle_softirqs(false);
+}
+
/**
* irq_enter_rcu - Enter an interrupt context with RCU watching
*/
@@ -921,7 +925,7 @@ static void run_ksoftirqd(unsigned int cpu)
* We can safely run softirq on inline stack, as we are not deep
* in the task stack here.
*/
- __do_softirq();
+ handle_softirqs(true);
ksoftirqd_run_end();
cond_resched();
return;
--
2.17.1