Message-ID: <87ikurhvf1.fsf@oracle.com>
Date: Thu, 19 Sep 2024 20:10:42 -0700
From: Ankur Arora <ankur.a.arora@...cle.com>
To: Frederic Weisbecker <frederic@...nel.org>
Cc: Steven Davis <goldside000@...look.com>, akpm@...ux-foundation.org,
ankur.a.arora@...cle.com, linux-kernel@...r.kernel.org,
peterz@...radead.org, tglx@...utronix.de
Subject: Re: [PATCH] irq_work: Replace wait loop with rcuwait_wait_event

Frederic Weisbecker <frederic@...nel.org> writes:
> On Thu, Sep 19, 2024 at 11:43:26AM -0400, Steven Davis wrote:
>> The previous implementation of irq_work_sync used a busy-wait
>> loop with cpu_relax() to check the status of IRQ work. This
>> approach, while functional, could lead to inefficiencies in
>> CPU usage.
>>
>> This commit replaces the busy-wait loop with the rcuwait_wait_event
>> mechanism. This change leverages the RCU wait mechanism to handle
>> waiting for IRQ work completion more efficiently, improving CPU
>> responsiveness and reducing unnecessary CPU usage.
>>
>> Signed-off-by: Steven Davis <goldside000@...look.com>
>> ---
>> kernel/irq_work.c | 3 +--
>> 1 file changed, 1 insertion(+), 2 deletions(-)
>>
>> diff --git a/kernel/irq_work.c b/kernel/irq_work.c
>> index 2f4fb336dda1..2b092a1d07a9 100644
>> --- a/kernel/irq_work.c
>> +++ b/kernel/irq_work.c
>> @@ -295,8 +295,7 @@ void irq_work_sync(struct irq_work *work)
>> return;
>> }
>>
>> - while (irq_work_is_busy(work))
>> - cpu_relax();
>> + rcuwait_wait_event(&work->irqwait, !irq_work_is_busy(work), TASK_UNINTERRUPTIBLE);
>
> Dan Carpenter brought this to my attention a few weeks ago, for another problem:
>
> perf_remove_from_context() <- disables preempt
> __perf_event_exit_context() <- disables preempt
> -> __perf_remove_from_context()
> -> perf_group_detach()
> -> perf_put_aux_event()
> -> put_event()
> -> _free_event()
> -> irq_work_sync()
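
To make the concern concrete, that chain boils down to the pattern below.
This is purely an illustrative sketch (the function name is made up, it is
not the actual perf code):

	static void example_sync_with_preemption_off(struct irq_work *work)
	{
		preempt_disable();
		/*
		 * With the busy-wait fallback gone, irq_work_sync() always
		 * goes through rcuwait_wait_event(), which sets
		 * TASK_UNINTERRUPTIBLE and may call schedule(), i.e. it can
		 * end up sleeping while preemption is disabled.
		 */
		irq_work_sync(work);
		preempt_enable();
	}
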
irq_work_sync() is also annotated with might_sleep() (which is probably how
Dan saw it), so in principle the rcuwait_wait_event() isn't wrong there.
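
For reference, reconstructed from memory rather than copied, the current
mainline irq_work_sync() is roughly:

	void irq_work_sync(struct irq_work *work)
	{
		lockdep_assert_irqs_enabled();
		might_sleep();

		/*
		 * PREEMPT_RT (for non-hard irq_work) and architectures
		 * without an irq_work interrupt already take the sleeping
		 * rcuwait path.
		 */
		if ((IS_ENABLED(CONFIG_PREEMPT_RT) && !irq_work_is_hard(work)) ||
		    !arch_irq_work_has_interrupt()) {
			rcuwait_wait_event(&work->irqwait, !irq_work_is_busy(work),
					   TASK_UNINTERRUPTIBLE);
			return;
		}

		/* The busy-wait fallback the patch replaces. */
		while (irq_work_is_busy(work))
			cpu_relax();
	}

The "return;" visible in the quoted hunk is the tail of that early rcuwait
path, so the patch only changes which primitive the fallback path waits with.
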
--
ankur