Message-ID: <20251109114500.GC2545891@noisy.programming.kicks-ass.net>
Date: Sun, 9 Nov 2025 12:45:00 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: Liangyan <liangyan.peng@...edance.com>
Cc: mingo@...hat.com, acme@...nel.org, namhyung@...nel.org,
	mark.rutland@....com, alexander.shishkin@...ux.intel.com,
	jolsa@...nel.org, irogers@...gle.com, adrian.hunter@...el.com,
	james.clark@...aro.org, bigeasy@...utronix.de,
	zengxianjun@...edance.com, linux-perf-users@...r.kernel.org,
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH] perf/core: Fix pending work re-queued in
 __perf_event_overflow

On Sun, Nov 09, 2025 at 06:32:53PM +0800, Liangyan wrote:
> A race condition occurs between task context and IRQ context when
> handling sigtrap tracepoint event overflows:
> 
> 1. In task context, an event overflows and its pending work is
>    queued to task->task_works
> 2. Before pending_work is set, the same event overflows in IRQ context
> 3. Both contexts queue the same perf pending work to task->task_works
> 
> This double queuing causes:
> - task_work_run() to enter an infinite loop calling perf_pending_task()
> - potential warnings and a use-after-free when the event is freed in
>   perf_pending_task()
> 
> Fix the race by disabling interrupts while queuing the perf pending work.
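
A minimal sketch of the interleaving described above, using only the names
that appear in the quoted hunk (the surrounding code is elided):

	/*
	 * Task context                         IRQ context (same event)
	 *
	 * sees !event->pending_work
	 * task_work_add() queues pending_task
	 *                                       sees !event->pending_work (not set yet)
	 *                                       task_work_add() queues pending_task again
	 *                                       event->pending_work = pending_id;
	 * event->pending_work = pending_id;
	 *
	 * With event->pending_task on task->task_works twice, task_work_run()
	 * keeps calling perf_pending_task(), and freeing the event from there
	 * is the potential use-after-free mentioned above.
	 */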



> Fixes: c5d93d23a260 ("perf: Enqueue SIGTRAP always via task_work.")
> Reported-by: Xianjun Zeng <zengxianjun@...edance.com>
> Signed-off-by: Liangyan <liangyan.peng@...edance.com>
> ---
>  kernel/events/core.c | 3 +++
>  1 file changed, 3 insertions(+)
> 
> diff --git a/kernel/events/core.c b/kernel/events/core.c
> index cae921f4d137..6c35a129f185 100644
> --- a/kernel/events/core.c
> +++ b/kernel/events/core.c
> @@ -10427,12 +10427,14 @@ static int __perf_event_overflow(struct perf_event *event,
>  		bool valid_sample = sample_is_allowed(event, regs);
>  		unsigned int pending_id = 1;
>  		enum task_work_notify_mode notify_mode;
> +		unsigned long flags;
>  
>  		if (regs)
>  			pending_id = hash32_ptr((void *)instruction_pointer(regs)) ?: 1;
>  
>  		notify_mode = in_nmi() ? TWA_NMI_CURRENT : TWA_RESUME;
>  
> +		local_irq_save(flags);

This could be written as:

		/*
		 * Comment that explains why we need to disable IRQs.
		 */
		guard(irqsave)();
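
For illustration, the hunk might then look roughly like this, with no explicit
local_irq_restore() since the guard re-enables interrupts when it goes out of
scope at the end of the block (the comment wording is only a placeholder, and
lines elided from the quoted diff are marked /* ... */):

		notify_mode = in_nmi() ? TWA_NMI_CURRENT : TWA_RESUME;

		/*
		 * Placeholder: explain that this keeps the ->pending_work
		 * check and task_work_add() atomic vs. the IRQ-context
		 * overflow path, so the same pending_task is not queued
		 * to task->task_works twice.
		 */
		guard(irqsave)();

		if (!event->pending_work &&
		    !task_work_add(current, &event->pending_task, notify_mode)) {
			event->pending_work = pending_id;
			/* ... */
		}
		/* ... */
	}	/* guard scope ends: previous IRQ state restored */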

>  		if (!event->pending_work &&
>  		    !task_work_add(current, &event->pending_task, notify_mode)) {
>  			event->pending_work = pending_id;
> @@ -10458,6 +10460,7 @@ static int __perf_event_overflow(struct perf_event *event,
>  			 */
>  			WARN_ON_ONCE(event->pending_work != pending_id);
>  		}
> +		local_irq_restore(flags);
>  	}
>  
>  	READ_ONCE(event->overflow_handler)(event, data, regs);
> -- 
> 2.39.3 (Apple Git-145)
> 
