Message-ID: <20200417165329.GE17973@kernel.org>
Date: Fri, 17 Apr 2020 13:53:29 -0300
From: Arnaldo Carvalho de Melo <arnaldo.melo@...il.com>
To: kan.liang@...ux.intel.com
Cc: jolsa@...hat.com, peterz@...radead.org, mingo@...hat.com,
linux-kernel@...r.kernel.org, namhyung@...nel.org,
adrian.hunter@...el.com, mathieu.poirier@...aro.org,
ravi.bangoria@...ux.ibm.com, alexey.budankov@...ux.intel.com,
vitaly.slobodskoy@...el.com, pavel.gerasimov@...el.com,
mpe@...erman.id.au, eranian@...gle.com, ak@...ux.intel.com
Subject: Re: [PATCH V4 11/17] perf tools: Save previous cursor nodes for LBR stitching approach
On Thu, Mar 19, 2020 at 01:25:11PM -0700, kan.liang@...ux.intel.com wrote:
> From: Kan Liang <kan.liang@...ux.intel.com>
>
> The cursor nodes, which are generated from samples, are eventually added
> into the callchain. To avoid regenerating cursor nodes from previous
> samples, the previous cursor nodes are also saved for the LBR stitching
> approach.
>
> Some options, e.g. hide-unresolved, may hide some LBRs.
> Add a 'valid' field to struct callchain_cursor_node to indicate this
> case. The LBR stitching approach will later append only the valid cursor
> nodes from previous samples.
>
> Reviewed-by: Andi Kleen <ak@...ux.intel.com>
> Signed-off-by: Kan Liang <kan.liang@...ux.intel.com>
Applied this on top:
diff --git a/tools/perf/util/machine.c b/tools/perf/util/machine.c
index 6e7f15b45389..737dee723a57 100644
--- a/tools/perf/util/machine.c
+++ b/tools/perf/util/machine.c
@@ -2364,8 +2364,7 @@ static bool alloc_lbr_stitch(struct thread *thread, unsigned int max_lbr)
return true;
free_lbr_stitch:
- free(thread->lbr_stitch);
- thread->lbr_stitch = NULL;
+ zfree(&thread->lbr_stitch);
err:
pr_warning("Failed to allocate space for stitched LBRs. Disable LBR stitch\n");
thread->lbr_stitch_enable = false;
diff --git a/tools/perf/util/thread.h b/tools/perf/util/thread.h
index c2eb3f943724..8456174a52c5 100644
--- a/tools/perf/util/thread.h
+++ b/tools/perf/util/thread.h
@@ -161,7 +161,7 @@ static inline void thread__free_stitch_list(struct thread *thread)
if (!lbr_stitch)
return;
- free(lbr_stitch->prev_lbr_cursor);
+ zfree(&lbr_stitch->prev_lbr_cursor);
zfree(&thread->lbr_stitch);
}