Message-ID: <20190903222215.GD25787@xps15>
Date: Tue, 3 Sep 2019 16:22:15 -0600
From: Mathieu Poirier <mathieu.poirier@...aro.org>
To: Leo Yan <leo.yan@...aro.org>
Cc: Arnaldo Carvalho de Melo <acme@...nel.org>,
Suzuki K Poulose <suzuki.poulose@....com>,
Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
Jiri Olsa <jolsa@...hat.com>,
Namhyung Kim <namhyung@...nel.org>,
linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org,
Mike Leach <mike.leach@...aro.org>,
Robert Walker <Robert.Walker@....com>,
Adrian Hunter <adrian.hunter@...el.com>
Subject: Re: [PATCH v1 1/3] perf cs-etm: Refactor instruction size handling
On Fri, Aug 30, 2019 at 02:24:19PM +0800, Leo Yan wrote:
> Several code pieces need to know the instruction size, but
> currently every place calculates the instruction size separately.
>
> This patch refactors the code to create a new function,
> cs_etm__instr_size(), as a central place to determine the instruction
> length based on the ISA type and the instruction address.
>
> Signed-off-by: Leo Yan <leo.yan@...aro.org>
> ---
> tools/perf/util/cs-etm.c | 44 +++++++++++++++++++++++++++-------------
> 1 file changed, 30 insertions(+), 14 deletions(-)
>
> diff --git a/tools/perf/util/cs-etm.c b/tools/perf/util/cs-etm.c
> index b3a5daaf1a8f..882a0718033d 100644
> --- a/tools/perf/util/cs-etm.c
> +++ b/tools/perf/util/cs-etm.c
> @@ -914,6 +914,26 @@ static inline int cs_etm__t32_instr_size(struct cs_etm_queue *etmq,
> return ((instrBytes[1] & 0xF8) >= 0xE8) ? 4 : 2;
> }
>
> +static inline int cs_etm__instr_size(struct cs_etm_queue *etmq,
> + u8 trace_chan_id,
> + enum cs_etm_isa isa,
> + u64 addr)
> +{
> + int insn_len;
> +
> + /*
> + * T32 instruction size might be 32-bit or 16-bit, decide by calling
> + * cs_etm__t32_instr_size().
> + */
> + if (isa == CS_ETM_ISA_T32)
> + insn_len = cs_etm__t32_instr_size(etmq, trace_chan_id, addr);
> + /* Otherwise, A64 and A32 instruction size are always 32-bit. */
> + else
> + insn_len = 4;
> +
> + return insn_len;
> +}
> +
> static inline u64 cs_etm__first_executed_instr(struct cs_etm_packet *packet)
> {
> /* Returns 0 for the CS_ETM_DISCONTINUITY packet */
> @@ -938,19 +958,23 @@ static inline u64 cs_etm__instr_addr(struct cs_etm_queue *etmq,
> const struct cs_etm_packet *packet,
> u64 offset)
> {
> + int insn_len;
> +
> if (packet->isa == CS_ETM_ISA_T32) {
> u64 addr = packet->start_addr;
>
> while (offset > 0) {
> - addr += cs_etm__t32_instr_size(etmq,
> - trace_chan_id, addr);
> + addr += cs_etm__instr_size(etmq, trace_chan_id,
> + packet->isa, addr);
> offset--;
> }
> return addr;
> }
>
> - /* Assume a 4 byte instruction size (A32/A64) */
> - return packet->start_addr + offset * 4;
> + /* Return instruction size for A32/A64 */
> + insn_len = cs_etm__instr_size(etmq, trace_chan_id,
> + packet->isa, packet->start_addr);
> + return packet->start_addr + offset * insn_len;
This patch will work, but from where I stand it makes things harder to
understand more than anything else.  It also adds coupling between
cs_etm__instr_addr() and cs_etm__instr_size(), meaning the code needs to be
carefully inspected in order to make changes to either one.
Last but not least, cs_etm__instr_size() isn't used by the upcoming
patches.  I really don't see what is gained here.
Thanks,
Mathieu
> }
>
> static void cs_etm__update_last_branch_rb(struct cs_etm_queue *etmq,
> @@ -1090,16 +1114,8 @@ static void cs_etm__copy_insn(struct cs_etm_queue *etmq,
> return;
> }
>
> - /*
> - * T32 instruction size might be 32-bit or 16-bit, decide by calling
> - * cs_etm__t32_instr_size().
> - */
> - if (packet->isa == CS_ETM_ISA_T32)
> - sample->insn_len = cs_etm__t32_instr_size(etmq, trace_chan_id,
> - sample->ip);
> - /* Otherwise, A64 and A32 instruction size are always 32-bit. */
> - else
> - sample->insn_len = 4;
> + sample->insn_len = cs_etm__instr_size(etmq, trace_chan_id,
> + packet->isa, sample->ip);
>
> cs_etm__mem_access(etmq, trace_chan_id, sample->ip,
> sample->insn_len, (void *)sample->insn);
> --
> 2.17.1
>