Message-ID: <YW1C5okzq/1BSLQy@hirez.programming.kicks-ass.net>
Date: Mon, 18 Oct 2021 11:48:22 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Michael Ellerman <mpe@...erman.id.au>
Cc: Kajol Jain <kjain@...ux.ibm.com>, linuxppc-dev@...ts.ozlabs.org,
linux-kernel@...r.kernel.org, mingo@...hat.com, acme@...nel.org,
jolsa@...nel.org, namhyung@...nel.org, ak@...ux.intel.com,
linux-perf-users@...r.kernel.org, maddy@...ux.ibm.com,
atrajeev@...ux.vnet.ibm.com, rnsastry@...ux.ibm.com,
yao.jin@...ux.intel.com, ast@...nel.org, daniel@...earbox.net,
songliubraving@...com, kan.liang@...ux.intel.com,
mark.rutland@....com, alexander.shishkin@...ux.intel.com,
paulus@...ba.org
Subject: Re: [PATCH v3 0/4] Add mem_hops field in perf_mem_data_src structure
On Mon, Oct 18, 2021 at 02:46:18PM +1100, Michael Ellerman wrote:
> Peter Zijlstra <peterz@...radead.org> writes:
> > On Wed, Oct 06, 2021 at 07:36:50PM +0530, Kajol Jain wrote:
> >
> >> Kajol Jain (4):
> >> perf: Add comment about current state of PERF_MEM_LVL_* namespace and
> >> remove an extra line
> >> perf: Add mem_hops field in perf_mem_data_src structure
> >> tools/perf: Add mem_hops field in perf_mem_data_src structure
> >> powerpc/perf: Fix data source encodings for L2.1 and L3.1 accesses
> >>
> >> arch/powerpc/perf/isa207-common.c | 26 +++++++++++++++++++++-----
> >> arch/powerpc/perf/isa207-common.h | 2 ++
> >> include/uapi/linux/perf_event.h | 19 ++++++++++++++++---
> >> tools/include/uapi/linux/perf_event.h | 19 ++++++++++++++++---
> >> tools/perf/util/mem-events.c | 20 ++++++++++++++++++--
> >> 5 files changed, 73 insertions(+), 13 deletions(-)
> >
> > Acked-by: Peter Zijlstra (Intel) <peterz@...radead.org>
> >
> > How do we want this routed? Shall I take it, or does Michael want it in
> > the Power tree?
>
> It's mostly non-powerpc, so I think you should take it.
>
> There's a slim chance we could end up with a conflict in the powerpc
> part, but that's no big deal.
Sure thing, into perf/core it goes. Thanks!
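
For reference, the new field lands in union perf_mem_data_src in
include/uapi/linux/perf_event.h. A minimal sketch of the resulting layout
(the bit widths and the PERF_MEM_HOPS_* encoding below are illustrative
assumptions based on the series description, not text quoted from the
patches themselves):

	union perf_mem_data_src {
		__u64 val;
		struct {
			__u64	mem_op:5,	/* type of opcode */
				mem_lvl:14,	/* memory hierarchy level */
				mem_snoop:5,	/* snoop mode */
				mem_lock:2,	/* lock instr */
				mem_dtlb:7,	/* tlb access */
				mem_lvl_num:4,	/* memory hierarchy level number */
				mem_remote:1,	/* remote */
				mem_snoopx:2,	/* snoop mode, ext */
				mem_blk:3,	/* access blocked */
				mem_hops:3,	/* hop level (new field) */
				mem_rsvd:18;	/* reserved */
		};
	};

	/* hop level: distance to the data, e.g. a remote core on the
	 * same node vs. a different node; lets powerpc distinguish
	 * L2.1/L3.1 (another core's cache) from plain remote accesses. */
	#define PERF_MEM_HOPS_0		0x01	/* remote core, same node */
	/* values 2-7 remain available for deeper hop levels */
	#define PERF_MEM_HOPS_SHIFT	43

A consumer would then decode the hop level as
(data_src.val >> PERF_MEM_HOPS_SHIFT) & 0x7, alongside the existing
mem_lvl_num/mem_remote fields.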