Message-ID:
 <LV3P223MB1042D6E8F26C131780375218F76C2@LV3P223MB1042.NAMP223.PROD.OUTLOOK.COM>
Date: Fri, 20 Sep 2024 14:22:16 -0400
From: Steven Davis <goldside000@...look.com>
To: ankur.a.arora@...cle.com
Cc: akpm@...ux-foundation.org,
	frederic@...nel.org,
	goldside000@...look.com,
	linux-kernel@...r.kernel.org,
	peterz@...radead.org,
	tglx@...utronix.de
Subject: Re: [PATCH] irq_work: Replace wait loop with rcuwait_wait_event

On Thu, 19 Sep 2024 at 20:10:42 -0700, Ankur Arora wrote:
> Frederic Weisbecker <frederic@...nel.org> writes:
>
>> Le Thu, Sep 19, 2024 at 11:43:26AM -0400, Steven Davis a écrit :
>>> The previous implementation of irq_work_sync used a busy-wait
>>> loop with cpu_relax() to check the status of IRQ work. This
>>> approach, while functional, could lead to inefficiencies in
>>> CPU usage.
>>>
>>> This commit replaces the busy-wait loop with the rcuwait_wait_event
>>> mechanism. This change leverages the RCU wait mechanism to handle
>>> waiting for IRQ work completion more efficiently, improving CPU
>>> responsiveness and reducing unnecessary CPU usage.
>>>
>>> Signed-off-by: Steven Davis <goldside000@...look.com>
>>> ---
>>>  kernel/irq_work.c | 3 +--
>>>  1 file changed, 1 insertion(+), 2 deletions(-)
>>>
>>> diff --git a/kernel/irq_work.c b/kernel/irq_work.c
>>> index 2f4fb336dda1..2b092a1d07a9 100644
>>> --- a/kernel/irq_work.c
>>> +++ b/kernel/irq_work.c
>>> @@ -295,8 +295,7 @@ void irq_work_sync(struct irq_work *work)
>>>  		return;
>>>  	}
>>>
>>> -	while (irq_work_is_busy(work))
>>> -		cpu_relax();
>>> +	rcuwait_wait_event(&work->irqwait, !irq_work_is_busy(work), TASK_UNINTERRUPTIBLE);
>>
>> Dan Carpenter brought this to my attention a few weeks ago for another problem:
>>
>> perf_remove_from_context() <- disables preempt
>> __perf_event_exit_context() <- disables preempt
>> -> __perf_remove_from_context()
>>    -> perf_group_detach()
>>       -> perf_put_aux_event()
>>          -> put_event()
>>             -> _free_event()
>>                -> irq_work_sync()
>
> irq_work_sync() is also annotated with might_sleep() (probably how Dan
> saw it) so in principle the rcuwait_wait_event() isn't wrong there.

The might_sleep() annotation does seem to indicate a preemptible context. My main
goal with this patch is to improve performance and responsiveness, and I believe
that rcuwait_wait_event() will do that. Let me know of any further improvements or
drawbacks to this approach, though.

	Steven 

> --
> ankur
