Message-ID: <20190411130434.GD11158@hirez.programming.kicks-ass.net>
Date: Thu, 11 Apr 2019 15:04:34 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: mark.rutland@....com, mingo@...nel.org, acme@...hat.com,
tglx@...utronix.de, brueckner@...ux.ibm.com,
torvalds@...ux-foundation.org, alexander.shishkin@...ux.intel.com,
heiko.carstens@...ibm.com, tmricht@...ux.ibm.com,
schwidefsky@...ibm.com, linux-kernel@...r.kernel.org,
keescook@...omium.org, jolsa@...hat.com, hpa@...or.com
Cc: linux-tip-commits@...r.kernel.org
Subject: Re: [tip:perf/urgent] perf/core: Fix perf_event_disable_inatomic()
race
On Wed, Apr 10, 2019 at 05:13:54AM -0700, tip-bot for Peter Zijlstra wrote:
> Commit-ID: 86071b11317550d994b55ce5e31aa06bcad783b5
> Gitweb: https://git.kernel.org/tip/86071b11317550d994b55ce5e31aa06bcad783b5
> Author: Peter Zijlstra <peterz@...radead.org>
> AuthorDate: Thu, 4 Apr 2019 15:03:00 +0200
> Committer: Ingo Molnar <mingo@...nel.org>
> CommitDate: Wed, 10 Apr 2019 13:47:09 +0200
>
> perf/core: Fix perf_event_disable_inatomic() race
>
> Thomas-Mich Richter reported he triggered a WARN()ing from event_function_local()
> on his s390. The problem boils down to:
>
>         CPU-A                              CPU-B
>
>         perf_event_overflow()
>           perf_event_disable_inatomic()
>             @pending_disable = 1
>             irq_work_queue();
>
>         sched-out
>           event_sched_out()
>             @pending_disable = 0
>
>                                            sched-in
>                                            perf_event_overflow()
>                                              perf_event_disable_inatomic()
>                                                @pending_disable = 1;
>                                                irq_work_queue(); // FAILS
>
>         irq_work_run()
>           perf_pending_event()
>             if (@pending_disable)
>               perf_event_disable_local(); // WHOOPS
>
> The problem exists in generic code, but s390 is particularly sensitive
> because it doesn't implement arch_irq_work_raise(), nor does it call
> irq_work_run() from its PMU interrupt handler (nor would that be
> sufficient in this case, because s390 also generates
> perf_event_overflow() from pmu::stop). Add to that the fact that s390
> is a virtual architecture and (virtual) CPU-A can stall long enough
> for the above race to happen, even if it would self-IPI.
>
> Adding an irq_work_sync() to event_sched_in() would work for all hardware
> PMUs that properly use irq_work_run(), but fails for software PMUs.
>
> Instead encode the CPU number in @pending_disable, such that we can
> tell which CPU requested the disable. This then allows us to detect
> the above scenario and even redirect the IPI to make up for the failed
> queue.
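
For reference, the shape of the fix is roughly this (a simplified sketch of
the approach, not the literal patch; the real code lives in
kernel/events/core.c and may differ in detail):

	/* @pending_disable now holds the requesting CPU, or -1 for "none". */
	static void perf_event_disable_inatomic(struct perf_event *event)
	{
		WRITE_ONCE(event->pending_disable, smp_processor_id());
		/* can fail if the work is still queued on another CPU, see below */
		irq_work_queue(&event->pending);
	}

	static void perf_pending_event_disable(struct perf_event *event)
	{
		int cpu = READ_ONCE(event->pending_disable);

		if (cpu < 0)
			return;

		if (cpu == smp_processor_id()) {
			WRITE_ONCE(event->pending_disable, -1);
			perf_event_disable_local(event);
			return;
		}

		/*
		 * The disable was requested on another CPU; this is the
		 * failed irq_work_queue() case from the changelog, so
		 * redirect the irq_work to the CPU the event runs on.
		 */
		irq_work_queue_on(&event->pending, cpu);
	}
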
Ingo, could you please fold in the below delta? It turns out I
overlooked two instances :-(
--- a/kernel/events/ring_buffer.c
+++ b/kernel/events/ring_buffer.c
@@ -392,7 +392,7 @@ void *perf_aux_output_begin(struct perf_
 	 * store that will be enabled on successful return
 	 */
 	if (!handle->size) { /* A, matches D */
-		event->pending_disable = 1;
+		event->pending_disable = smp_processor_id();
 		perf_output_wakeup(handle);
 		local_set(&rb->aux_nest, 0);
 		goto err_put;
@@ -480,7 +480,7 @@ void perf_aux_output_end(struct perf_out
 
 	if (wakeup) {
 		if (handle->aux_flags & PERF_AUX_FLAG_TRUNCATED)
-			handle->event->pending_disable = 1;
+			handle->event->pending_disable = smp_processor_id();
 		perf_output_wakeup(handle);
 	}
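
(In case it isn't obvious from the delta alone, the reason these two AUX
sites need the same treatment: once @pending_disable encodes a CPU number,
a leftover literal 1 reads as "CPU 1 requested the disable", so the handler
would either disable the event only when it happens to run on CPU 1 or ship
a pointless IPI there.)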