Message-ID: <20150415165541.GZ5029@twins.programming.kicks-ass.net>
Date: Wed, 15 Apr 2015 18:55:41 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Andi Kleen <andi@...stfloor.org>
Cc: Kan Liang <kan.liang@...el.com>, acme@...nel.org,
eranian@...gle.com, linux-kernel@...r.kernel.org
Subject: Re: [PATCH V2 1/6] perf,core: allow invalid context events to be
part of sw/hw groups
On Wed, Apr 15, 2015 at 06:21:11PM +0200, Andi Kleen wrote:
> On Wed, Apr 15, 2015 at 06:15:28PM +0200, Peter Zijlstra wrote:
> > On Wed, Apr 15, 2015 at 03:56:11AM -0400, Kan Liang wrote:
> > > From: Kan Liang <kan.liang@...el.com>
> > >
> > > A PMU marked perf_invalid_context has no state to switch on context
> > > switch; everything is global. So its events can safely be part of
> > > sw/hw groups.
> > > In sched_out/sched_in, del/add must still be called, so that the
> > > perf_invalid_context event can be disabled/enabled accordingly during
> > > a context switch. The event count is only read when the event is
> > > already sched_in.
> > >
> > > However, group read does not work with mixed events.
> > >
> > > For example,
> > > perf record -e '{cycles,uncore_imc_0/cas_count_read/}:S' -a sleep 1
> > > It always gets EINVAL.
> > >
> > > This patch set intends to fix this issue.
> > > perf record -e '{cycles,uncore_imc_0/cas_count_read/}:S' -a sleep 1
> > > [ perf record: Woken up 1 times to write data ]
> > > [ perf record: Captured and wrote 0.202 MB perf.data (12 samples) ]
> > >
> > > This patch special-cases invalid context events and allows them to be
> > > part of sw/hw groups.
> >
> > I don't get it. What, Why?
>
> Without the patch you can't mix uncore and cpu core events in the same
> group.
>
> Collecting uncore in PMIs is useful, for example to get memory
> bandwidth over time.
Well, start with a coherent changelog; why do you still think those are
optional?