Message-ID: <ZvHfoXAfGXIZPaeG@pavilion.home>
Date: Mon, 23 Sep 2024 23:37:37 +0200
From: Frederic Weisbecker <frederic@...nel.org>
To: Ankur Arora <ankur.a.arora@...cle.com>
Cc: Steven Davis <goldside000@...look.com>, akpm@...ux-foundation.org,
	linux-kernel@...r.kernel.org, peterz@...radead.org,
	tglx@...utronix.de
Subject: Re: [PATCH] irq_work: Replace wait loop with rcuwait_wait_event

On Sun, Sep 22, 2024 at 11:35:11PM -0700, Ankur Arora wrote:
> 
> Steven Davis <goldside000@...look.com> writes:
> 
> > On Thu, 19 Sep 2024 at 20:10:42 -0700, Ankur Arora wrote:
> >> Frederic Weisbecker <frederic@...nel.org> writes:
> >>
> >>> On Thu, Sep 19, 2024 at 11:43:26AM -0400, Steven Davis wrote:
> >>>> The previous implementation of irq_work_sync used a busy-wait
> >>>> loop with cpu_relax() to check the status of IRQ work. This
> >>>> approach, while functional, could lead to inefficiencies in
> >>>> CPU usage.
> >>>>
> >>>> This commit replaces the busy-wait loop with the rcuwait_wait_event
> >>>> mechanism. This change leverages the RCU wait mechanism to handle
> >>>> waiting for IRQ work completion more efficiently, improving CPU
> >>>> responsiveness and reducing unnecessary CPU usage.
> >>>>
> >>>> Signed-off-by: Steven Davis <goldside000@...look.com>
> >>>> ---
> >>>>  kernel/irq_work.c | 3 +--
> >>>>  1 file changed, 1 insertion(+), 2 deletions(-)
> >>>>
> >>>> diff --git a/kernel/irq_work.c b/kernel/irq_work.c
> >>>> index 2f4fb336dda1..2b092a1d07a9 100644
> >>>> --- a/kernel/irq_work.c
> >>>> +++ b/kernel/irq_work.c
> >>>> @@ -295,8 +295,7 @@ void irq_work_sync(struct irq_work *work)
> >>>>  		return;
> >>>>  	}
> >>>>
> >>>> -	while (irq_work_is_busy(work))
> >>>> -		cpu_relax();
> >>>> +	rcuwait_wait_event(&work->irqwait, !irq_work_is_busy(work), TASK_UNINTERRUPTIBLE);
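
(For context on the mechanism: the pairing this relies on looks roughly
like the sketch below. Placeholder names throughout, not the actual
irq_work code; if I remember right, the wake-up side already sits at the
end of irq_work_single() for the cases that use rcuwait today.)

	#include <linux/rcuwait.h>

	static struct rcuwait wait;	/* like work->irqwait, rcuwait_init()'d at setup */
	static bool done;		/* stands in for !irq_work_is_busy(work) */

	/* Waiter side: what irq_work_sync() becomes with this patch.
	 * Sleeps in TASK_UNINTERRUPTIBLE until the condition is observed
	 * true, instead of spinning on cpu_relax(). */
	static void waiter(void)
	{
		rcuwait_wait_event(&wait, READ_ONCE(done), TASK_UNINTERRUPTIBLE);
	}

	/* Completion side: publish the condition, then wake any waiter.
	 * rcuwait_wake_up() looks up the sleeping task under RCU, so it
	 * can be called from the irq_work callback context. */
	static void completer(void)
	{
		WRITE_ONCE(done, true);
		rcuwait_wake_up(&wait);
	}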
> >>>
> >>> Dan Carpenter brought this to my attention a few weeks ago for another problem:
> >>>
> >>> perf_remove_from_context() <- disables preempt
> >>> __perf_event_exit_context() <- disables preempt
> >>> -> __perf_remove_from_context()
> >>>    -> perf_group_detach()
> >>>       -> perf_put_aux_event()
> >>>          -> put_event()
> >>>             -> _free_event()
> >>>                -> irq_work_sync()
> >>
> >> irq_work_sync() is also annotated with might_sleep() (probably how Dan
> >> saw it) so in principle the rcuwait_wait_event() isn't wrong there.
> >
> > The might_sleep() annotation does seem to indicate a preemptible context.
> 
> Yeah, and that is one of the problems with might_sleep(), cond_resched()
> etc. They can get out of sync with the surrounding code or code paths.
> 
> In this case, it might mean that the annotation is wrong and should be
> removed. So, it might be worth looking at the calling paths a bit more
> closely to see if calling schedule() is really safe in all of these
> contexts.
> 
> And, if all the other contexts are safe, then it would be a good idea
> to fix the perf_remove_from_context() path, since with this patch we
> might end up scheduling in spinlock context.
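
To make the concern above concrete, the pattern in question is basically
the following (hypothetical caller and includes, standing in for the perf
path quoted above):

	#include <linux/irq_work.h>
	#include <linux/preempt.h>

	static void example_caller(struct irq_work *work)
	{
		preempt_disable();
		/* ... */
		irq_work_sync(work);	/* with this patch: rcuwait_wait_event()
					 * -> schedule() with preemption disabled;
					 * CONFIG_DEBUG_ATOMIC_SLEEP would already
					 * flag this via the might_sleep() in
					 * irq_work_sync() */
		preempt_enable();
	}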

After a second look, I think this path is fine. If an aux event is
being unlinked, that means it hasn't been removed from the context
yet. So the reference dropped there can't be the last one, and it
won't end up calling _free_event().

So it _looks_ good after all.

Thanks.
