Message-ID: <20170516081923.fxg67gawc44eg6i6@hirez.programming.kicks-ass.net>
Date:   Tue, 16 May 2017 10:19:23 +0200
From:   Peter Zijlstra <peterz@...radead.org>
To:     "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
Cc:     Steven Rostedt <rostedt@...dmis.org>, linux-kernel@...r.kernel.org,
        Thomas Gleixner <tglx@...utronix.de>,
        Ingo Molnar <mingo@...nel.org>,
        Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
        Masami Hiramatsu <mhiramat@...nel.org>
Subject: Re: [RFC][PATCH 0/5] perf/tracing/cpuhotplug: Fix locking order

On Mon, May 15, 2017 at 11:40:43AM -0700, Paul E. McKenney wrote:

> Given that you acquire the global pmus_lock when doing the
> get_online_cpus(), and given that CPU hotplug is rare, is it possible
> to momentarily acquire the global pmus_lock in perf_event_init_cpu()
> and perf_event_exit_cpu() and interact directly with that?  Then perf
> would presumably leave alone any outgoing CPU that had already executed
> perf_event_exit_cpu(), and also any incoming CPU that had not already
> executed perf_event_init_cpu().
> 
> What prevents this approach from working?

Lack of sleep probably ;-)

I'd blame the kids, but those have actually been very good lately.

You're suggesting the below on top, right? I'll run it with lockdep
enabled after I chase some regression..

---
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -8997,7 +8997,6 @@ int perf_pmu_register(struct pmu *pmu, c
 {
 	int cpu, ret;
 
-	get_online_cpus();
 	mutex_lock(&pmus_lock);
 	ret = -ENOMEM;
 	pmu->pmu_disable_count = alloc_percpu(int);
@@ -9093,7 +9092,6 @@ int perf_pmu_register(struct pmu *pmu, c
 	ret = 0;
 unlock:
 	mutex_unlock(&pmus_lock);
-	put_online_cpus();
 
 	return ret;
 
@@ -11002,10 +11000,9 @@ static void perf_event_exit_cpu_context(
 	struct perf_cpu_context *cpuctx;
 	struct perf_event_context *ctx;
 	struct pmu *pmu;
-	int idx;
 
-	idx = srcu_read_lock(&pmus_srcu);
-	list_for_each_entry_rcu(pmu, &pmus, entry) {
+	mutex_lock(&pmus_lock);
+	list_for_each_entry(pmu, &pmus, entry) {
 		cpuctx = per_cpu_ptr(pmu->pmu_cpu_context, cpu);
 		ctx = &cpuctx->ctx;
 
@@ -11014,7 +11011,7 @@ static void perf_event_exit_cpu_context(
 		cpuctx->online = 0;
 		mutex_unlock(&ctx->mutex);
 	}
-	srcu_read_unlock(&pmus_srcu, idx);
+	mutex_unlock(&pmus_lock);
 }
 #else
 
@@ -11027,12 +11024,11 @@ int perf_event_init_cpu(unsigned int cpu
 	struct perf_cpu_context *cpuctx;
 	struct perf_event_context *ctx;
 	struct pmu *pmu;
-	int idx;
 
 	perf_swevent_init_cpu(cpu);
 
-	idx = srcu_read_lock(&pmus_srcu);
-	list_for_each_entry_rcu(pmu, &pmus, entry) {
+	mutex_lock(&pmus_lock);
+	list_for_each_entry(pmu, &pmus, entry) {
 		cpuctx = per_cpu_ptr(pmu->pmu_cpu_context, cpu);
 		ctx = &cpuctx->ctx;
 
@@ -11040,7 +11036,7 @@ int perf_event_init_cpu(unsigned int cpu
 		cpuctx->online = 1;
 		mutex_unlock(&ctx->mutex);
 	}
-	srcu_read_unlock(&pmus_srcu, idx);
+	mutex_unlock(&pmus_lock);
 
 	return 0;
 }
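
For reference, a rough user-space sketch of the resulting pattern (pthread
mutex instead of the kernel mutex, made-up names like pmu_stub and
cpu_ready[]; an illustration of the idea, not kernel code): both the
registration path and the hotplug callbacks serialize on pmus_lock, so a CPU
is either fully set up for every registered pmu or not touched at all, and
perf_pmu_register() no longer needs get_online_cpus().

/* sketch.c: single-lock serialization of PMU registration vs. CPU hotplug */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

#define NR_CPUS 4

struct pmu_stub {
	const char *name;
	bool cpu_ready[NR_CPUS];	/* stands in for the per-CPU contexts */
	struct pmu_stub *next;
};

static pthread_mutex_t pmus_lock = PTHREAD_MUTEX_INITIALIZER;
static struct pmu_stub *pmus;		/* list of registered PMUs */
static bool cpu_online[NR_CPUS];	/* stands in for cpu_online_mask */

/* like perf_pmu_register(): pmus_lock alone, no get_online_cpus() */
static void pmu_register(struct pmu_stub *pmu)
{
	pthread_mutex_lock(&pmus_lock);
	for (int cpu = 0; cpu < NR_CPUS; cpu++)
		pmu->cpu_ready[cpu] = cpu_online[cpu];
	pmu->next = pmus;
	pmus = pmu;
	pthread_mutex_unlock(&pmus_lock);
}

/* like perf_event_init_cpu(): walk pmus under pmus_lock, not SRCU */
static void cpu_up_callback(int cpu)
{
	pthread_mutex_lock(&pmus_lock);
	cpu_online[cpu] = true;
	for (struct pmu_stub *pmu = pmus; pmu; pmu = pmu->next)
		pmu->cpu_ready[cpu] = true;
	pthread_mutex_unlock(&pmus_lock);
}

/* like perf_event_exit_cpu(): the mirror image on the way down */
static void cpu_down_callback(int cpu)
{
	pthread_mutex_lock(&pmus_lock);
	cpu_online[cpu] = false;
	for (struct pmu_stub *pmu = pmus; pmu; pmu = pmu->next)
		pmu->cpu_ready[cpu] = false;
	pthread_mutex_unlock(&pmus_lock);
}

int main(void)
{
	struct pmu_stub pmu = { .name = "stub" };

	cpu_up_callback(0);
	pmu_register(&pmu);	/* sees CPU0 online, CPU1-3 offline */
	cpu_up_callback(1);	/* brings CPU1 up for "stub" as well */
	printf("%s: cpu0=%d cpu1=%d cpu2=%d\n", pmu.name,
	       pmu.cpu_ready[0], pmu.cpu_ready[1], pmu.cpu_ready[2]);
	cpu_down_callback(0);
	return 0;
}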
