Message-ID: <3dd30190-b266-826d-3e2d-91f1446cc5fc@arm.com>
Date:   Fri, 1 Nov 2019 15:30:19 +0000
From:   Robert Walker <robert.walker@....com>
To:     Leo Yan <leo.yan@...aro.org>,
        Arnaldo Carvalho de Melo <acme@...nel.org>,
        Mathieu Poirier <mathieu.poirier@...aro.org>,
        Suzuki K Poulose <suzuki.poulose@....com>,
        Mike Leach <mike.leach@...aro.org>,
        Peter Zijlstra <peterz@...radead.org>,
        Ingo Molnar <mingo@...hat.com>,
        Mark Rutland <mark.rutland@....com>,
        Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
        Jiri Olsa <jolsa@...hat.com>,
        Namhyung Kim <namhyung@...nel.org>,
        linux-arm-kernel <linux-arm-kernel@...ts.infradead.org>,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
        Coresight ML <coresight@...ts.linaro.org>
Subject: Re: [PATCH v2 1/4] perf cs-etm: Continuously record last branches

On 01/11/2019 02:07, Leo Yan wrote:
> Every time an instruction sample is synthesized, the last branch
> recording is reset.  This is fine if the instruction period is big
> enough; for example, with the option '--itrace=i100000' the last
> branch array is reset for every instruction sample (100000
> instructions per period), and before the next instruction sample is
> generated enough packets arrive to refill the last branch array.  On
> the other hand, with a very small period far fewer packets arrive
> between two consecutive instruction samples, so if the last branch
> array was reset for the previous instruction sample it is almost
> empty for the next one.
>
> To allow the last branches to work for any instruction period, this
> patch stops resetting the last branch array for every instruction
> sample and resets it only when the trace data is flushed.  The last
> branches are now reset in just two cases, when the trace starts and
> when the trace is discontinuous, so they can be recorded
> continuously.
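
As a rough, self-contained sketch of the effect described above (this is
not the perf code; the buffer, names and numbers are made up for
illustration), a last-branch buffer that is reset on every sample ends up
almost empty whenever the period is small, while one that is only reset
on a flush keeps accumulating:

#include <stdio.h>

#define LBR_SIZE 16

struct lbr {
	unsigned long entries[LBR_SIZE];
	int nr;
};

static void lbr_add(struct lbr *b, unsigned long addr)
{
	/* A real ring buffer overwrites its oldest entry when full;
	 * dropping new entries is enough for this sketch. */
	if (b->nr < LBR_SIZE)
		b->entries[b->nr++] = addr;
}

int main(void)
{
	struct lbr b = { .nr = 0 };
	int reset_per_sample = 1;	/* set to 0 to model the patched policy */
	unsigned long period = 8;	/* deliberately tiny instruction period */
	unsigned long pc, insns = 0;

	for (pc = 0; pc < 64; pc++) {
		if (pc % 4 == 0)	/* pretend every 4th instruction is a branch */
			lbr_add(&b, pc);
		if (++insns == period) {
			printf("sample at pc=%lu: %d last-branch entries\n",
			       pc, b.nr);
			insns = 0;
			if (reset_per_sample)
				b.nr = 0;	/* reset per sample: start over */
		}
	}
	return 0;
}

With reset_per_sample = 1 every sample reports only a couple of entries;
with 0 the buffer fills up to LBR_SIZE and stays full, which is the
behavior the patch gives for small periods.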

Is this the right thing to do?  This would cause profiling tools to 
count the same branch several times if it appears in multiple 
instruction samples, which could result in a biased profile.
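
A toy illustration of that bias (addresses are hypothetical, and this is
not how perf report actually aggregates): if the buffer is not reset, the
same branch record is attached to several consecutive samples, and a tool
that simply sums branch entries per sample weights it once per sample:

#include <stdio.h>

/* Last-branch buffers attached to two consecutive samples; because the
 * buffer was not reset in between, the first two records reappear. */
static const unsigned long sample1[] = { 0x400100, 0x400200 };
static const unsigned long sample2[] = { 0x400100, 0x400200, 0x400300 };

static unsigned long count(const unsigned long *s, size_t n,
			   unsigned long addr)
{
	unsigned long hits = 0;
	size_t i;

	for (i = 0; i < n; i++)
		if (s[i] == addr)
			hits++;
	return hits;
}

int main(void)
{
	unsigned long hits = 0;

	hits += count(sample1, sizeof(sample1) / sizeof(sample1[0]), 0x400100);
	hits += count(sample2, sizeof(sample2) / sizeof(sample2[0]), 0x400100);

	/* The branch at 0x400100 was executed once but contributes twice. */
	printf("branch 0x400100 counted %lu times\n", hits);
	return 0;
}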

The current implementation matches the behavior of intel_pt, where the
branch buffer is reset after each sample, so each instruction sample
only includes branches since the previous sample.

However, x86 LBR (perf record -b) does appear to repeat the same full
branch stack across several samples until a new stack is captured.

I'm not sure what the right answer is here.  For AutoFDO we're likely
to use a much bigger period (>10000 instructions), so we won't be
affected, but other tools might be.

Regards


Rob


> Signed-off-by: Leo Yan <leo.yan@...aro.org>
> ---
>   tools/perf/util/cs-etm.c | 7 ++++---
>   1 file changed, 4 insertions(+), 3 deletions(-)
>
> diff --git a/tools/perf/util/cs-etm.c b/tools/perf/util/cs-etm.c
> index f5f855fff412..8be6d010ae84 100644
> --- a/tools/perf/util/cs-etm.c
> +++ b/tools/perf/util/cs-etm.c
> @@ -1153,9 +1153,6 @@ static int cs_etm__synth_instruction_sample(struct cs_etm_queue *etmq,
>   			"CS ETM Trace: failed to deliver instruction event, error %d\n",
>   			ret);
>   
> -	if (etm->synth_opts.last_branch)
> -		cs_etm__reset_last_branch_rb(tidq);
> -
>   	return ret;
>   }
>   
> @@ -1486,6 +1483,10 @@ static int cs_etm__flush(struct cs_etm_queue *etmq,
>   		tidq->prev_packet = tmp;
>   	}
>   
> +	/* Reset last branches after flushing the trace data */
> +	if (etm->synth_opts.last_branch)
> +		cs_etm__reset_last_branch_rb(tidq);
> +
>   	return err;
>   }
>   
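
For reference, a small-period run that would hit this path might look
something like the commands below.  This is only a sketch: the cs_etm
event and sink naming is platform dependent, and the itrace string just
combines the 'i<period>' and 'l' options described in
tools/perf/Documentation/itrace.txt.

	perf record -e cs_etm//u -- ./workload
	perf report --itrace=i100l --branch-history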
