Message-ID: <20190318093725.GJ6058@hirez.programming.kicks-ass.net>
Date: Mon, 18 Mar 2019 10:37:25 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: "Lendacky, Thomas" <Thomas.Lendacky@....com>
Cc: "x86@...nel.org" <x86@...nel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Arnaldo Carvalho de Melo <acme@...nel.org>,
Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
Namhyung Kim <namhyung@...nel.org>,
Thomas Gleixner <tglx@...utronix.de>,
Jiri Olsa <jolsa@...hat.com>
Subject: Re: [RFC PATCH v2 1/2] x86/perf/amd: Resolve race condition when
 disabling PMC
On Fri, Mar 15, 2019 at 08:40:53PM +0000, Lendacky, Thomas wrote:
> +void amd_pmu_disable_all(void)
> +{
> + unsigned long overflow_check[BITS_TO_LONGS(X86_PMC_IDX_MAX)];
> + struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
> + int idx;
> +
> + bitmap_zero(overflow_check, X86_PMC_IDX_MAX);
> +
> + for (idx = 0; idx < x86_pmu.num_counters; idx++) {
> + u64 val;
> +
> + if (!test_bit(idx, cpuc->active_mask))
> + continue;
> +
> + rdmsrl(x86_pmu_config_addr(idx), val);
> + if (!(val & ARCH_PERFMON_EVENTSEL_ENABLE))
> + continue;
> +
> + val &= ~ARCH_PERFMON_EVENTSEL_ENABLE;
> + wrmsrl(x86_pmu_config_addr(idx), val);
> +
> + /*
> + * If the interrupt is enabled, this counter must be checked
> + * for an overflow condition to avoid possibly changing the
> + * counter value before the NMI handler runs.
> + */
> + if (val & ARCH_PERFMON_EVENTSEL_INT)
> + __set_bit(idx, overflow_check);
> + }
I think you can ditch overflow_check and directly call
x86_pmu_disable_all() here.
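For reference, x86_pmu_disable_all() in arch/x86/events/core.c (quoting
from memory, so double-check against the tree) is this exact
read-modify-write loop, minus the overflow_check bookkeeping:

void x86_pmu_disable_all(void)
{
	struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
	int idx;

	for (idx = 0; idx < x86_pmu.num_counters; idx++) {
		u64 val;

		/* Skip counters without a programmed event. */
		if (!test_bit(idx, cpuc->active_mask))
			continue;

		/* Clear the enable bit if it is currently set. */
		rdmsrl(x86_pmu_config_addr(idx), val);
		if (!(val & ARCH_PERFMON_EVENTSEL_ENABLE))
			continue;
		val &= ~ARCH_PERFMON_EVENTSEL_ENABLE;
		wrmsrl(x86_pmu_config_addr(idx), val);
	}
}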
> +
> + /*
> + * This shouldn't be called from NMI context, but add a safeguard here
> + * to return, since if we're in NMI context we can't wait for an NMI
> + * to reset an overflowed counter value.
> + */
> + if (in_nmi())
> + return;
> +
> + /*
> + * Check each counter for overflow and wait for it to be reset by the
> + * NMI if it has overflowed.
> + */
> + for (idx = 0; idx < x86_pmu.num_counters; idx++) {
> + if (!test_bit(idx, overflow_check))
And simply iterate cpuc->active_mask again here.
> + continue;
> +
> + amd_pmu_wait_on_overflow(idx);
> + }
> +}
Because, per x86_pmu_hw_config(), we _always_ have EVENTSEL_INT set,
even for !sampling events -- such that we can deal with overflow -- and
we should 'always' have EVENTSEL_ENABLE set while 'active'; see
x86_pmu_enable_all(). So for every active counter the overflow_check
bit would have been set anyway, and the bitmap is just active_mask.
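IOW, something like this completely untested sketch, keeping the
amd_pmu_wait_on_overflow() helper from your patch:

static void amd_pmu_disable_all(void)
{
	struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
	int idx;

	/* Clear EVENTSEL_ENABLE on all active counters. */
	x86_pmu_disable_all();

	/*
	 * This shouldn't be called from NMI context, but add a safeguard
	 * here to return, since if we're in NMI context we can't wait for
	 * an NMI to reset an overflowed counter value.
	 */
	if (in_nmi())
		return;

	/*
	 * Every active counter had EVENTSEL_INT and EVENTSEL_ENABLE set,
	 * so iterating active_mask is equivalent to the overflow_check
	 * bitmap: check each counter for overflow and wait for it to be
	 * reset by the NMI if it has overflowed.
	 */
	for (idx = 0; idx < x86_pmu.num_counters; idx++) {
		if (!test_bit(idx, cpuc->active_mask))
			continue;

		amd_pmu_wait_on_overflow(idx);
	}
}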