[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID:
<SJ2P223MB1026771CE6171D840DF29768F7632@SJ2P223MB1026.NAMP223.PROD.OUTLOOK.COM>
Date: Thu, 19 Sep 2024 11:25:23 -0400
From: Steven Davis <goldside000@...look.com>
To: tglx@...utronix.de
Cc: akpm@...ux-foundation.org,
ankur.a.arora@...cle.com,
frederic@...nel.org,
goldside000@...look.com,
linux-kernel@...r.kernel.org,
peterz@...radead.org
Subject: Re: [PATCH] irq_work: Improve CPU Responsiveness in irq_work_sync with cond_resched()
On Thu, 19 Sep 2024 at 15:54:21 +0200, Thomas Gleixner wrote:
> On Wed, Sep 18 2024 at 11:23, Steven Davis wrote:
>> Add cond_resched() to the busy-wait loop in irq_work_sync to improve
>> CPU responsiveness and prevent starvation of other tasks.
>>
>> Previously, the busy-wait loop used cpu_relax() alone, which, while
>> reducing power consumption, could still lead to excessive CPU
>> monopolization in scenarios where IRQ work remains busy for extended
>> periods. By incorporating cond_resched(), the CPU is periodically yielded
>> to the scheduler, allowing other tasks to execute and enhancing overall
>> system responsiveness.
>>
>> - while (irq_work_is_busy(work))
>> + int retry_count = 0;
>> +
>> + while (irq_work_is_busy(work)) {
>> cpu_relax();
>> +
>> + if (retry_count++ > 1000) {
>> + cond_resched();
>> + retry_count = 0;
>> + }
>
> Did you verify that all callers are actually calling from preemptible
> context?
Yes, I reviewed the 11 calls to irq_work_sync, and they all seem to
occur in preemptible contexts.
> If so, then we should just get rid of the loop waiting completely and
> use the rcu_wait mechanism which RT uses.
Will do that soon. Thanks for the suggestion.
Steven
Powered by blists - more mailing lists