Message-ID: <tip-4b07372a32c0c1505a7634ad7e607d83340ef645@git.kernel.org>
Date: Fri, 17 Mar 2017 03:17:17 -0700
From: tip-bot for Andy Lutomirski <tipbot@...or.com>
To: linux-tip-commits@...r.kernel.org
Cc: torvalds@...ux-foundation.org, alexander.shishkin@...ux.intel.com,
linux-kernel@...r.kernel.org, bpetkov@...e.de, luto@...nel.org,
tglx@...utronix.de, vincent.weaver@...ne.edu, eranian@...gle.com,
jolsa@...hat.com, hpa@...or.com, mingo@...nel.org,
peterz@...radead.org, acme@...hat.com
Subject: [tip:perf/urgent] x86/perf: Clarify why x86_pmu_event_mapped()
isn't racy
Commit-ID: 4b07372a32c0c1505a7634ad7e607d83340ef645
Gitweb: http://git.kernel.org/tip/4b07372a32c0c1505a7634ad7e607d83340ef645
Author: Andy Lutomirski <luto@...nel.org>
AuthorDate: Thu, 16 Mar 2017 12:59:40 -0700
Committer: Ingo Molnar <mingo@...nel.org>
CommitDate: Fri, 17 Mar 2017 08:28:26 +0100
x86/perf: Clarify why x86_pmu_event_mapped() isn't racy
Naively, it looks racy, but ->mmap_sem saves it. Add a comment and a
lockdep assertion.
Signed-off-by: Andy Lutomirski <luto@...nel.org>
Cc: Alexander Shishkin <alexander.shishkin@...ux.intel.com>
Cc: Arnaldo Carvalho de Melo <acme@...hat.com>
Cc: Borislav Petkov <bpetkov@...e.de>
Cc: H. Peter Anvin <hpa@...or.com>
Cc: Jiri Olsa <jolsa@...hat.com>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Stephane Eranian <eranian@...gle.com>
Cc: Thomas Gleixner <tglx@...utronix.de>
Cc: Vince Weaver <vincent.weaver@...ne.edu>
Link: http://lkml.kernel.org/r/03a1e629063899168dfc4707f3bb6e581e21f5c6.1489694270.git.luto@kernel.org
Signed-off-by: Ingo Molnar <mingo@...nel.org>
---
arch/x86/events/core.c | 12 ++++++++++++
1 file changed, 12 insertions(+)
diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
index e07b36c..183a972 100644
--- a/arch/x86/events/core.c
+++ b/arch/x86/events/core.c
@@ -2109,6 +2109,18 @@ static void x86_pmu_event_mapped(struct perf_event *event)
if (!(event->hw.flags & PERF_X86_EVENT_RDPMC_ALLOWED))
return;
+ /*
+ * This function relies on not being called concurrently in two
+ * tasks in the same mm. Otherwise one task could observe
+ * perf_rdpmc_allowed > 1 and return all the way back to
+ * userspace with CR4.PCE clear while another task is still
+ * doing on_each_cpu_mask() to propagate CR4.PCE.
+ *
+ * For now, this can't happen because all callers hold mmap_sem
+ * for write. If this changes, we'll need a different solution.
+ */
+ lockdep_assert_held_exclusive(&current->mm->mmap_sem);
+
if (atomic_inc_return(&current->mm->context.perf_rdpmc_allowed) == 1)
on_each_cpu_mask(mm_cpumask(current->mm), refresh_pce, NULL, 1);
}