Message-ID: <20100330191813.GF5211@lenovo>
Date: Tue, 30 Mar 2010 23:18:13 +0400
From: Cyrill Gorcunov <gorcunov@...il.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Robert Richter <robert.richter@....com>,
Stephane Eranian <eranian@...gle.com>,
Ingo Molnar <mingo@...e.hu>,
LKML <linux-kernel@...r.kernel.org>,
Lin Ming <ming.m.lin@...el.com>
Subject: Re: [PATCH 0/3] perf/core, x86: unify perfctr bitmasks
On Tue, Mar 30, 2010 at 09:04:00PM +0200, Peter Zijlstra wrote:
> On Tue, 2010-03-30 at 22:29 +0400, Cyrill Gorcunov wrote:
> > On Tue, Mar 30, 2010 at 06:55:13PM +0200, Peter Zijlstra wrote:
> > [...]
> > > -static int p4_hw_config(struct perf_event_attr *attr, struct hw_perf_event *hwc)
> > > +static int p4_hw_config(struct perf_event *event)
> > > {
> > > int cpu = raw_smp_processor_id();
> > > u32 escr, cccr;
> > > @@ -444,11 +431,29 @@ static int p4_hw_config(struct perf_even
> > > */
> > >
> > > cccr = p4_default_cccr_conf(cpu);
> > > - escr = p4_default_escr_conf(cpu, attr->exclude_kernel, attr->exclude_user);
> > > - hwc->config = p4_config_pack_escr(escr) | p4_config_pack_cccr(cccr);
> > > + escr = p4_default_escr_conf(cpu, event->attr.exclude_kernel,
> > > + event->attr.exclude_user);
> > > + event->hw.config = p4_config_pack_escr(escr) |
> > > + p4_config_pack_cccr(cccr);
> > >
> > > if (p4_ht_active() && p4_ht_thread(cpu))
> > > - hwc->config = p4_set_ht_bit(hwc->config);
> > > + event->hw.config = p4_set_ht_bit(event->hw.config);
> > > +
> > > + if (event->attr.type != PERF_TYPE_RAW)
> > > + return 0;
> > > +
> > > + /*
> > > + * We don't control raw events so it's up to the caller
> > > + * to pass sane values (and we don't count the thread number
> > > + * on HT machine but allow HT-compatible specifics to be
> > > + * passed on)
> > > + *
> > > + * XXX: HT wide things should check perf_paranoid_cpu() &&
> > > + * CAP_SYS_ADMIN
> > > + */
> > > + event->hw.config |= event->attr.config &
> > > + (p4_config_pack_escr(P4_ESCR_MASK_HT) |
> > > + p4_config_pack_cccr(P4_CCCR_MASK_HT));
> > >
> > > return 0;
> > > }
> > [...]
> >
> > P4 events' thread specifics are a bit messier compared with
> > architectural events. There are thread-specific (TS) and
> > thread-independent (TI) events. The exact effect of mixing the
> > flags behind what we call the "ANY" bit is described in two
> > matrices in the SDM.
> >
> > So to keep the code simple I chose to just bind events to a
> > particular logical cpu; when an event migrates to, say, a second
> > cpu, the bits are simply flipped according to which cpu the event
> > is going to run on. Pretty simple. Even more -- if there were RAW
> > events with the "ANY" bit set, those bits would just be stripped
> > and the events bound to a single cpu.
> >
> > I'll try to find an easy way to satisfy this "ANY" bit request,
> > though it will require some time (perhaps later today, or rather
> > tomorrow).
>
> Right, so don't worry about actively supporting ANY on regular events;
> counting wider than a logical cpu is a daft thing.
>
> What would be nice is to detect whether the raw event provided is a TI
> (ANY) event, in which case we should apply the extra paranoia.
>
Well, there would be a side effect anyway, so I think it should be
fixed along these lines: if a caller wants an ANY event and has
enough rights for that -- go ahead, you'll get what you want, the
kernel is not going to do the dirty work for you :) So I would only
need to fix two procedures -- event assignment (where the permission
will be checked as well) and event migration, where I will not do
any additional work for the caller.
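Roughly what I have in mind for the assignment-time check -- an
untested sketch, where the mask value and helper names are invented
here just to show the idea:

```c
/*
 * Untested sketch of the assignment-time check: if a raw config
 * carries TI ("ANY") bits, require a privileged caller; otherwise
 * accept it as-is.  Mask value and helper names are made up.
 */
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define EACCES_STUB 13

/* hypothetical TI ("ANY") bits inside the packed config */
#define P4_CONFIG_MASK_ANY	(3ULL << 16)

/* stand-in for perf_paranoid_cpu() && !capable(CAP_SYS_ADMIN) */
static bool paranoid_and_unprivileged = true;

static int p4_validate_raw_event(uint64_t config)
{
	if (config & P4_CONFIG_MASK_ANY) {
		/* HT-wide counting: privileged callers only */
		if (paranoid_and_unprivileged)
			return -EACCES_STUB;
	}
	return 0;
}
```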
At least, if I'm not missing anything, it should not be too difficult
or invasive. Will check and send a patch... a bit later. At the moment
we're on the safe side anyway, i.e. the former patch is fine with me!
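For reference, the per-thread bit flipping I described earlier is
roughly the following -- a simplified, untested sketch that takes a
config encoded for thread 0 and re-encodes it for a given sibling
(bit positions follow the usual P4 ESCR layout, but this is
illustration, not the driver):

```c
/*
 * Simplified sketch: re-encode an ESCR config built for thread 0
 * for the HT sibling the event is going to run on.  Illustration
 * only; the real driver works with the full ESCR/CCCR masks.
 */
#include <assert.h>
#include <stdint.h>

#define ESCR_T0_OS	(1ULL << 3)
#define ESCR_T0_USR	(1ULL << 2)
#define ESCR_T1_OS	(1ULL << 1)
#define ESCR_T1_USR	(1ULL << 0)

static uint64_t p4_escr_for_thread(uint64_t escr_t0, int thread)
{
	uint64_t t0 = escr_t0 & (ESCR_T0_OS | ESCR_T0_USR);

	if (!thread)
		return escr_t0;		/* already encoded for thread 0 */

	escr_t0 &= ~t0;
	return escr_t0 | (t0 >> 2);	/* shift T0 bits into T1 slots */
}
```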
-- Cyrill
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/