Date:	Thu, 15 Jan 2015 14:31:57 +0200
From:	Alexander Shishkin <alexander.shishkin@...ux.intel.com>
To:	Peter Zijlstra <peterz@...radead.org>
Cc:	Ingo Molnar <mingo@...hat.com>, linux-kernel@...r.kernel.org,
	Robert Richter <rric@...nel.org>,
	Frederic Weisbecker <fweisbec@...il.com>,
	Mike Galbraith <efault@....de>,
	Paul Mackerras <paulus@...ba.org>,
	Stephane Eranian <eranian@...gle.com>,
	Andi Kleen <ak@...ux.intel.com>, kan.liang@...el.com,
	adrian.hunter@...el.com, markus.t.metzger@...el.com,
	mathieu.poirier@...aro.org, Kaixu Xia <kaixu.xia@...aro.org>,
	acme@...radead.org
Subject: Re: [PATCH v9 12/14] x86: perf: intel_pt: Intel PT PMU driver

Peter Zijlstra <peterz@...radead.org> writes:

> On Wed, Jan 14, 2015 at 02:18:21PM +0200, Alexander Shishkin wrote:
>> +static __init int pt_init(void)
>> +{
>
>> +	pt_pmu.pmu.attr_groups	= pt_attr_groups;
>> +	pt_pmu.pmu.task_ctx_nr	= perf_hw_context;
>
> I just noticed this one: how can this ever work? We want the PT thing to
> always get programmed, right? -- because we disallow creating more than
> 1?
>
> Which reminds me: does that exclusive thing you did not still allow you
> to create one cpu-wide and one per-task event (they're separate
> contexts)? At which point we're not schedulable at all.
>
> By sticking it on the HW context list it can end up not being programmed
> because it's stuck after a bunch of hardware events that don't all fit on
> the PMU.
>
> Would not the SW list be more appropriate; the SW list is a list of
> events that's guaranteed to be schedulable.

You're right, of course.
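
(For reference, a sketch of what moving PT onto the SW context list
would look like in pt_init() -- this is only an illustration against the
quoted driver code, not the actual follow-up patch:)

```c
/* Sketch only: place PT events on the software context list, whose
 * events are guaranteed schedulable, instead of letting them compete
 * with hardware counters on the HW list. */
pt_pmu.pmu.attr_groups	= pt_attr_groups;
pt_pmu.pmu.task_ctx_nr	= perf_sw_context;	/* was perf_hw_context */
```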

As for the exclusive events, how about something like the code below (on
top of the previous exclusive event patch)? The only remaining issue I
see is creating cpu-wide events in the presence of per-thread
(event->cpu == -1) events. Both would still work, but only one of them
would actually get scheduled at a time. I'm thinking of adding a counter
for per-thread events to struct pmu for this purpose, so that if any are
present, we can disallow creating cpu-wide events. Or we can leave it as
it is.

What do you think?

---
 kernel/events/core.c | 52 +++++++++++++++++++++++++++++++++++++++++++++++-----
 1 file changed, 47 insertions(+), 5 deletions(-)

diff --git a/kernel/events/core.c b/kernel/events/core.c
index cf0bf99f53..e8c86530e2 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -7688,14 +7688,11 @@ static bool exclusive_event_match(struct perf_event *e1, struct perf_event *e2)
 	return false;
 }
 
-static bool exclusive_event_ok(struct perf_event *event,
-			      struct perf_event_context *ctx)
+static bool __exclusive_event_ok(struct perf_event *event,
+				 struct perf_event_context *ctx)
 {
 	struct perf_event *iter_event;
 
-	if (!(event->pmu->capabilities & PERF_PMU_CAP_EXCLUSIVE))
-		return true;
-
 	list_for_each_entry(iter_event, &ctx->event_list, event_entry) {
 		if (exclusive_event_match(iter_event, event))
 			return false;
@@ -7704,6 +7701,51 @@ static bool exclusive_event_ok(struct perf_event *event,
 	return true;
 }
 
+static bool __exclusive_event_ok_on_cpu(struct perf_event *event, int cpu)
+{
+	struct perf_event_context *cpuctx;
+	bool ret;
+
+	cpuctx = find_get_context(event->pmu, NULL, cpu);
+	mutex_lock(&cpuctx->mutex);
+	ret = __exclusive_event_ok(event, cpuctx);
+	mutex_unlock(&cpuctx->mutex);
+	perf_unpin_context(cpuctx);
+	put_ctx(cpuctx);
+
+	return ret;
+}
+
+static bool exclusive_event_ok(struct perf_event *event,
+			       struct perf_event_context *ctx)
+{
+	bool ret = true;
+
+	if (!(event->pmu->capabilities & PERF_PMU_CAP_EXCLUSIVE))
+		return true;
+
+	if (!__exclusive_event_ok(event, ctx))
+		return false;
+
+	if (ctx->task) {
+		int cpu;
+
+		if (event->cpu == -1) {
+			get_online_cpus();
+			for_each_online_cpu(cpu) {
+				ret = __exclusive_event_ok_on_cpu(event, cpu);
+				if (!ret)
+					break;
+			}
+			put_online_cpus();
+		} else {
+			ret = __exclusive_event_ok_on_cpu(event, event->cpu);
+		}
+	}
+
+	return ret;
+}
+
 /**
  * sys_perf_event_open - open a performance event, associate it to a task/cpu
  *
-- 
2.1.4
