Message-ID: <20150316204826.GK31334@tassilo.jf.intel.com>
Date: Mon, 16 Mar 2015 13:48:26 -0700
From: Andi Kleen <ak@...ux.intel.com>
To: "Liang, Kan" <kan.liang@...el.com>
Cc: Namhyung Kim <namhyung@...nel.org>,
"acme@...nel.org" <acme@...nel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"eranian@...gle.com" <eranian@...gle.com>
Subject: Re: [PATCH 1/1] perf, tool: partial callgraph and time support in
 perf record
On Mon, Mar 16, 2015 at 08:35:30PM +0000, Liang, Kan wrote:
>
>
> >
> > Hi Kan,
> >
> > On Fri, Mar 13, 2015 at 02:18:07AM +0000, kan.liang@...el.com wrote:
> > > From: Kan Liang <kan.liang@...el.com>
> > >
> > > When multiple events are sampled it may not be needed to collect
> > > callgraphs for all of them. The sample sites are usually nearby, and
> > > it's enough to collect the callgraphs on a reference event (such as
> > > precise cycles or precise instructions). Similarly we also don't need
> > > fine grained time stamps on all events, as it's enough to have time
> > > stamps on the regular reference events. This patchkit adds the ability
> > > to turn off callgraphs and time stamps per event. This in turn can
> > > reduce sampling overhead and the size of the perf.data (add some data)
> >
> > Have you taken a look into group sampling feature?
> > (e.g. perf record -e '{ev1,ev2}:S')
> >
>
> I didn't find any issues when running group read.
> The patch doesn't change the behavior of group read features.
>
> Did you observe any issues after applying the patch?
I think Namhyung's question was whether group read could be used
instead to decrease the data size.
The answer is no: it solves a different problem. Group read
is just fine granularity counting. It cannot be used
to sample multiple events in parallel.
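
To make the contrast concrete, the command lines might look roughly
like this (the per-event call-graph/time terms and ./workload below
are only an assumed sketch of what the patchkit could provide, not
syntax taken from the patch):

  # Group sampling: the leader (cycles) is sampled; instructions is
  # only read back as a counter value inside each cycles sample.
  perf record -e '{cycles,instructions}:S' -- ./workload

  # Per-event call-graph/time (assumed syntax): both events are
  # sampled independently, but call chains and fine-grained
  # timestamps are only recorded on the reference event.
  perf record -e 'cpu/cycles,call-graph=fp,time=1/pp' \
              -e 'cpu/instructions,call-graph=no,time=0/' -- ./workload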
-Andi