Message-ID:
<SJ2P223MB10263844181902531B671FB6F7622@SJ2P223MB1026.NAMP223.PROD.OUTLOOK.COM>
Date: Wed, 18 Sep 2024 11:23:17 -0400
From: Steven Davis <goldside000@...look.com>
To: akpm@...ux-foundation.org
Cc: linux-kernel@...r.kernel.org,
Steven Davis <goldside000@...look.com>
Subject: [PATCH] irq_work: Improve CPU Responsiveness in irq_work_sync with cond_resched()
Add cond_resched() to the busy-wait loop in irq_work_sync() to improve
CPU responsiveness and prevent starvation of other tasks.

Previously, the busy-wait loop relied on cpu_relax() alone, which
reduces power consumption but can still monopolize the CPU when the
IRQ work stays busy for an extended period. Calling cond_resched()
after every 1000 spin iterations periodically yields the CPU to the
scheduler, allowing other runnable tasks to make progress and
improving overall system responsiveness.
Signed-off-by: Steven Davis <goldside000@...look.com>
---
 kernel/irq_work.c | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)
diff --git a/kernel/irq_work.c b/kernel/irq_work.c
index 2f4fb336dda1..bdc478979ee6 100644
--- a/kernel/irq_work.c
+++ b/kernel/irq_work.c
@@ -295,9 +295,17 @@ void irq_work_sync(struct irq_work *work)
 		return;
 	}
 
-	while (irq_work_is_busy(work))
+	int retry_count = 0;
+
+	while (irq_work_is_busy(work)) {
 		cpu_relax();
+		if (retry_count++ > 1000) {
+			cond_resched();
+			retry_count = 0;
+		}
+	}
 }
+
 EXPORT_SYMBOL_GPL(irq_work_sync);
 
 static void run_irq_workd(unsigned int cpu)
--
2.39.5