Date:	Tue, 29 Jun 2010 16:59:51 +0200
From:	Peter Zijlstra <peterz@...radead.org>
To:	Frederic Weisbecker <fweisbec@...il.com>
Cc:	paulus <paulus@...ba.org>,
	stephane eranian <eranian@...glemail.com>,
	Robert Richter <robert.richter@....com>,
	Will Deacon <will.deacon@....com>,
	Paul Mundt <lethal@...ux-sh.org>,
	Cyrill Gorcunov <gorcunov@...il.com>,
	Lin Ming <ming.m.lin@...el.com>,
	Yanmin <yanmin_zhang@...ux.intel.com>,
	Deng-Cheng Zhu <dengcheng.zhu@...il.com>,
	David Miller <davem@...emloft.net>,
	linux-kernel@...r.kernel.org
Subject: Re: [RFC][PATCH 09/11] perf: Default PMU ops

On Tue, 2010-06-29 at 16:58 +0200, Frederic Weisbecker wrote:
> On Thu, Jun 24, 2010 at 04:28:13PM +0200, Peter Zijlstra wrote:
> > Provide default implementations for the pmu txn methods, this allows
> > us to remove some conditional code.
> > 
> > Signed-off-by: Peter Zijlstra <a.p.zijlstra@...llo.nl>
> > ---
> >  kernel/perf_event.c |   48 ++++++++++++++++++++++++++++++++++++------------
> >  1 file changed, 36 insertions(+), 12 deletions(-)
> > 
> > Index: linux-2.6/kernel/perf_event.c
> > ===================================================================
> > --- linux-2.6.orig/kernel/perf_event.c
> > +++ linux-2.6/kernel/perf_event.c
> > @@ -656,21 +656,14 @@ group_sched_in(struct perf_event *group_
> >  {
> >  	struct perf_event *event, *partial_group = NULL;
> >  	struct pmu *pmu = group_event->pmu;
> > -	bool txn = false;
> >  
> >  	if (group_event->state == PERF_EVENT_STATE_OFF)
> >  		return 0;
> >  
> > -	/* Check if group transaction availabe */
> > -	if (pmu->start_txn)
> > -		txn = true;
> > -
> > -	if (txn)
> > -		pmu->start_txn(pmu);
> > +	pmu->start_txn(pmu);
> >  
> >  	if (event_sched_in(group_event, cpuctx, ctx)) {
> > -		if (txn)
> > -			pmu->cancel_txn(pmu);
> > +		pmu->cancel_txn(pmu);
> >  		return -EAGAIN;
> >  	}
> >  
> > @@ -684,7 +677,7 @@ group_sched_in(struct perf_event *group_
> >  		}
> >  	}
> >  
> > -	if (!txn || !pmu->commit_txn(pmu))
> > +	if (!pmu->commit_txn(pmu))
> >  		return 0;
> >  
> >  group_error:
> > @@ -699,8 +692,7 @@ group_error:
> >  	}
> >  	event_sched_out(group_event, cpuctx, ctx);
> >  
> > -	if (txn)
> > -		pmu->cancel_txn(pmu);
> > +	pmu->cancel_txn(pmu);
> >  
> >  	return -EAGAIN;
> >  }
> > @@ -4755,6 +4747,26 @@ static struct list_head pmus;
> >  static DEFINE_MUTEX(pmus_lock);
> >  static struct srcu_struct pmus_srcu;
> >  
> > +static void perf_pmu_nop(struct pmu *pmu)
> > +{
> > +}
> > +
> > +static void perf_pmu_start_txn(struct pmu *pmu)
> > +{
> > +	perf_pmu_disable(pmu);
> > +}
> > +
> > +static int perf_pmu_commit_txn(struct pmu *pmu)
> > +{
> > +	perf_pmu_enable(pmu);
> > +	return 0;
> > +}
> > +
> > +static void perf_pmu_cancel_txn(struct pmu *pmu)
> > +{
> > +	perf_pmu_enable(pmu);
> > +}
> 
> 
> So why do you need the perf_pmu_*able wrappers now that you bring in
> stubs when none is provided?
> 
> Actually, one problem is that this means making two indirect nop calls
> for software events.
> 
> Should the txn methods really map to the enable/disable ops in the
> off-case? It's probably better to let pmu implementations deal with
> that. If they didn't provide txn implementations, it means they don't
> need them, hence these should directly map to a nop.
> 

You mean, if (!pmu->start_txn && pmu->pmu_enable) { /* install defaults */ }?
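
For illustration, a minimal sketch of what that conditional fallback could
look like at pmu registration time, reusing the default stubs from the patch
above (the perf_pmu_nop_* helpers and the exact hook point are assumptions,
not the posted code):

	/* Sketch only: install txn defaults for pmus that did not set them. */
	static void perf_pmu_nop_void(struct pmu *pmu)
	{
	}

	static int perf_pmu_nop_int(struct pmu *pmu)
	{
		return 0; /* commit always succeeds */
	}

	static void perf_pmu_install_txn_defaults(struct pmu *pmu)
	{
		if (pmu->start_txn)
			return; /* pmu brings its own txn implementation */

		if (pmu->pmu_enable) {
			/* Fall back to disabling/enabling around the group. */
			pmu->start_txn  = perf_pmu_start_txn;
			pmu->commit_txn = perf_pmu_commit_txn;
			pmu->cancel_txn = perf_pmu_cancel_txn;
		} else {
			/* Software pmu: go straight to nops, avoiding the
			 * double indirect call through the wrappers. */
			pmu->start_txn  = perf_pmu_nop_void;
			pmu->commit_txn = perf_pmu_nop_int;
			pmu->cancel_txn = perf_pmu_nop_void;
		}
	}

That would match the direction Frederic suggests: software pmus map directly
to nops instead of going through the enable/disable wrappers.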