Message-ID: <20150409191029.GA29855@us.ibm.com>
Date:	Thu, 9 Apr 2015 12:10:29 -0700
From:	Sukadev Bhattiprolu <sukadev@...ux.vnet.ibm.com>
To:	Peter Zijlstra <peterz@...radead.org>
Cc:	Paul Mackerras <paulus@...ba.org>,
	Arnaldo Carvalho de Melo <acme@...nel.org>,
	mingo@...hat.com, Michael Ellerman <mpe@...erman.id.au>,
	dev@...yps.com, linux-kernel@...r.kernel.org,
	linuxppc-dev@...ts.ozlabs.org
Subject: Re: [PATCH v2 4/5] perf: Define PMU_TXN_READ interface

Peter Zijlstra [peterz@...radead.org] wrote:
| On Tue, Apr 07, 2015 at 05:34:58PM -0700, Sukadev Bhattiprolu wrote:
| > diff --git a/kernel/events/core.c b/kernel/events/core.c
| > index 1ac99d1..a001582 100644
| > --- a/kernel/events/core.c
| > +++ b/kernel/events/core.c
| > @@ -3644,6 +3644,33 @@ static void orphans_remove_work(struct work_struct *work)
| >  	put_ctx(ctx);
| >  }
| >  
| > +/*
| > + * Use the transaction interface to read the group of events in @leader.
| > + * PMUs like the 24x7 counters in Power, can use this to queue the events
| > + * in the ->read() operation and perform the actual read in ->commit_txn.
| > + *
| > + * Other PMUs can ignore the ->start_txn and ->commit_txn and read each
| > + * PMU directly in the ->read() operation.
| > + */
| > +static int perf_event_read_txn(struct perf_event *leader)
| 
| perf_event_read_group() might be a better name. Ah, I see that's already
| taken. Bugger.
| 
| See the below patch.

Sure, I will include your patch in the next version and use
perf_event_read_group().

| 
| > +{
| > +	int ret;
| > +	struct perf_event *sub;
| > +	struct pmu *pmu;
| > +
| > +	pmu = leader->pmu;
| > +
| > +	pmu->start_txn(pmu, PERF_PMU_TXN_READ);
| > +
| > +	perf_event_read(leader);
| > +	list_for_each_entry(sub, &leader->sibling_list, group_entry)
| > +		perf_event_read(sub);
| > +
| > +	ret = pmu->commit_txn(pmu, PERF_PMU_TXN_READ);
| > +
| > +	return ret;
| > +}
| 
| And while were here, should we change the NOP txn implementation to not
| call perf_pmu_disable for TXN_READ ?
| 
| That seems entirely unneeded in this case.

Yes. Should we use a per-cpu, per-PMU variable to save and check the
transaction type, like this? (I am modeling it on the cpuhw->group_flag
used in the x86, Power, and other PMUs.)


From 2f3658b0b131739dc08e0d6d579e58864d1777bc Mon Sep 17 00:00:00 2001
From: Sukadev Bhattiprolu <sukadev@...ux.vnet.ibm.com>
Date: Thu, 9 Apr 2015 13:47:50 -0400
Subject: [PATCH 1/1] perf: Have NOP txn interface ignore non-ADD txns

The NOP txn interface should ignore non-TXN_ADD transactions and
avoid disabling/enabling the PMU.

Use a per-cpu, per-PMU flag to store/check the type of transaction
in progress.

Thanks to Peter Zijlstra for the input.

Signed-off-by: Sukadev Bhattiprolu <sukadev@...ux.vnet.ibm.com>
---

diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
index 9e869b2..9466864 100644
--- a/include/linux/perf_event.h
+++ b/include/linux/perf_event.h
@@ -160,7 +160,10 @@ struct perf_event;
 /*
  * Common implementation detail of pmu::{start,commit,cancel}_txn
  */
-#define PERF_EVENT_TXN 0x1
+#define PERF_EVENT_TXN_ADD      0x1    /* txn to add/schedule event on PMU */
+#define PERF_EVENT_TXN_READ     0x2    /* txn to read event counts */
+
+#define PERF_EVENT_TXN_MASK     (PERF_EVENT_TXN_ADD|PERF_EVENT_TXN_READ)
 
 /**
  * pmu::capabilities flags
@@ -240,8 +243,10 @@ struct pmu {
         *
         * Start the transaction, after this ->add() doesn't need to
         * do schedulability tests.
+        *
+        * Optional.
         */
-       void (*start_txn)               (struct pmu *pmu); /* optional */
+       void (*start_txn)               (struct pmu *pmu, int flags);
        /*
         * If ->start_txn() disabled the ->add() schedulability test
         * then ->commit_txn() is required to perform one. On success
@@ -534,6 +534,7 @@ struct perf_cpu_context {
 	ktime_t				hrtimer_interval;
 	struct pmu			*unique_pmu;
 	struct perf_cgroup		*cgrp;
+	int				group_flag;
 };
 
 struct perf_output_handle {
diff --git a/kernel/events/core.c b/kernel/events/core.c
index 0ebc468..08d0c3e 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -6746,18 +6746,35 @@ static int perf_pmu_nop_int(struct pmu *pmu)
 
 static void perf_pmu_start_txn(struct pmu *pmu, int flags)
 {
-	perf_pmu_disable(pmu);
+	struct perf_cpu_context *cpuctx = this_cpu_ptr(pmu->pmu_cpu_context);
+
+	BUG_ON(cpuctx->group_flag);
+
+	cpuctx->group_flag = flags;
+
+	if (flags & PERF_EVENT_TXN_ADD)
+		perf_pmu_disable(pmu);
 }
 
 static int perf_pmu_commit_txn(struct pmu *pmu)
 {
-	perf_pmu_enable(pmu);
+	struct perf_cpu_context *cpuctx = this_cpu_ptr(pmu->pmu_cpu_context);
+
+	if (cpuctx->group_flag & PERF_EVENT_TXN_ADD)
+		perf_pmu_enable(pmu);
+
+	cpuctx->group_flag &= ~PERF_EVENT_TXN_MASK;
 	return 0;
 }
 
 static void perf_pmu_cancel_txn(struct pmu *pmu)
 {
-	perf_pmu_enable(pmu);
+	struct perf_cpu_context *cpuctx = this_cpu_ptr(pmu->pmu_cpu_context);
+
+	if (cpuctx->group_flag & PERF_EVENT_TXN_ADD)
+		perf_pmu_enable(pmu);
+
+	cpuctx->group_flag &= ~PERF_EVENT_TXN_MASK;
 }
 
 static int perf_event_idx_default(struct perf_event *event)
