Message-ID: <20140409135706.GD8488@redhat.com>
Date: Wed, 9 Apr 2014 09:57:06 -0400
From: Don Zickus <dzickus@...hat.com>
To: Namhyung Kim <namhyung@...il.com>
Cc: acme@...nel.org, peterz@...radead.org,
LKML <linux-kernel@...r.kernel.org>, jolsa@...hat.com,
jmario@...hat.com, fowles@...each.com, eranian@...gle.com,
andi.kleen@...el.com
Subject: Re: [PATCH 6/6] perf, sort: Allow unique sorting instead of
combining hist_entries
On Wed, Apr 09, 2014 at 02:31:00PM +0900, Namhyung Kim wrote:
> On Mon, 24 Mar 2014 15:34:36 -0400, Don Zickus wrote:
> > The cache contention tool needs to keep all the perf records unique in order
> > to properly parse all the data. Currently add_hist_entry() will combine
> > the duplicate record and add the weight/period to the existing record.
> >
> > This throws away the unique data the cache contention tool needs (mainly
> > the data source). Create a flag to force the records to stay unique.
>
> No. This is why I said you need to add 'mem' and 'snoop' sort keys into
> the c2c tool. This is not how sort works IMHO - if you need to make
> samples unique, let the sort key(s) distinguish them somehow, or you can
> combine same samples (in terms of sort keys) and use the combined entry's
> stat.nr_events and stat.period or weight.
Ok. I understand your point. Perhaps that came from my not fully
understanding the sorting algorithm when I did this. I can look into
adding the 'mem' and 'snoop' sort keys.
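
To make sure I follow the idea, here is a rough, simplified sketch (not the
actual perf sort.c code; the struct and function names are made up): entries
are only combined when every sort key compares equal, so folding the data
source fields into the keys should keep the samples the c2c tool cares about
distinct.

/*
 * Simplified illustration only, with hypothetical names: an entry is merged
 * only when every sort key compares equal, so adding the data-source fields
 * ('mem' level and 'snoop' result) to the key keeps otherwise-identical
 * samples separate.
 */
#include <stdint.h>

struct sample_key {
	uint64_t ip;		/* instruction pointer sort key */
	uint64_t daddr;		/* data address sort key */
	uint32_t mem_lvl;	/* 'mem' key: memory level of the access */
	uint32_t snoop;		/* 'snoop' key: snoop result */
};

/* Returns 0 only when all sort keys match, i.e. the samples may be combined. */
static int sample_key_cmp(const struct sample_key *a, const struct sample_key *b)
{
	if (a->ip != b->ip)
		return a->ip < b->ip ? -1 : 1;
	if (a->daddr != b->daddr)
		return a->daddr < b->daddr ? -1 : 1;
	if (a->mem_lvl != b->mem_lvl)
		return a->mem_lvl < b->mem_lvl ? -1 : 1;
	if (a->snoop != b->snoop)
		return a->snoop < b->snoop ? -1 : 1;
	return 0;	/* same in every key: combine and bump nr_events/period */
}

With 'mem' and 'snoop' in the key, two samples that differ only in data
source would no longer collapse into one hist_entry.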
One concern I do have is that we were calculating statistics based on the
weight (mean, median, stddev). I was afraid that combining the entries would
throw off those calculations, since we could no longer determine them
accurately. Is that true?
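
For illustration, a minimal sketch of what I mean (hypothetical names, not
the current c2c code): if a combined entry only keeps a count, sum, and sum
of squares of the weights, the mean and stddev can still be recovered
exactly, but the median is lost once the individual weights are discarded.

/*
 * Minimal sketch with hypothetical names: aggregate the per-sample weights
 * into count, sum, and sum of squares. Mean and standard deviation can be
 * computed from the aggregate; the median cannot.
 */
#include <math.h>
#include <stdint.h>

struct weight_stats {
	uint64_t nr;		/* number of combined samples */
	double	 sum;		/* sum of weights */
	double	 sum_sq;	/* sum of squared weights */
};

static void weight_stats_add(struct weight_stats *ws, double weight)
{
	ws->nr++;
	ws->sum += weight;
	ws->sum_sq += weight * weight;
}

static double weight_mean(const struct weight_stats *ws)
{
	return ws->nr ? ws->sum / ws->nr : 0.0;
}

static double weight_stddev(const struct weight_stats *ws)
{
	double mean, var;

	if (ws->nr < 2)
		return 0.0;
	mean = ws->sum / ws->nr;
	var = ws->sum_sq / ws->nr - mean * mean;	/* population variance */
	return var > 0.0 ? sqrt(var) : 0.0;
}

So the median (or any other percentile) would still need the raw weights, or
at least a histogram of them, kept per entry.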
Cheers,
Don