Message-ID: <dec78c67-0631-c4a7-95e1-4b931ce9ecd7@linux.intel.com>
Date:   Fri, 14 Sep 2018 09:47:49 +0300
From:   Alexey Budankov <alexey.budankov@...ux.intel.com>
To:     linux-kernel-owner@...r.kernel.org, peterz@...radead.org,
        tglx@...utronix.de, acme@...nel.org, mingo@...hat.com,
        linux-kernel@...r.kernel.org
Cc:     ak@...ux.intel.com, jolsa@...hat.com, namhyung@...nel.org,
        Kan Liang <kan.liang@...ux.intel.com>
Subject: Re: [PATCH] perf/x86/intel/lbr: Optimize context switches for LBR

Hi,

On 13.09.2018 23:08, linux-kernel-owner@...r.kernel.org wrote:
> From: Kan Liang <kan.liang@...ux.intel.com>
> 
> LBR can add large overhead when a benchmark has a high context-switch
> rate, for example the avrora sub-benchmark of DaCapo.
> 
> Baseline: java -jar dacapo-9.12-MR1-bach.jar avrora -n 20
> With LBR: perf record --branch-filter any,u -- java -jar
> dacapo-9.12-MR1-bach.jar avrora -n 20
> 
> Baseline (ms)    With LBR (ms)    Overhead
> 6508		 19831		  205%
> 
> In principle the LBRs need to be flushed between threads, and that is
> what the current code does.

IMHO, ideally, the LBR stack would be preserved and restored when 
switching between execution contexts. That would allow implementing 
a per-thread statistical call graph view in the perf tools, based 
entirely on HW capabilities. For some cases it could be advantageous 
compared with the traditional DWARF-based call graph.
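
(For reference, perf already accepts an LBR-assisted call-graph mode
on CPUs that support LBR call-stack, e.g.:

  perf record --call-graph lbr -- java -jar dacapo-9.12-MR1-bach.jar avrora -n 20

so part of the plumbing on the tools side exists already.)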

To me this virtualization looks similar to, e.g., HW performance 
counters, whose values are saved to and restored from the perf_event 
object on context switches. But this would surely be a bigger effort.
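
For illustration, a minimal sketch of such a save/restore (modeled on
the existing __intel_pmu_lbr_save()/__intel_pmu_lbr_restore() helpers
in lbr.c; the entry type and helper names below are hypothetical):

struct lbr_stack_entry {
	u64 from, to;
};

/* Save the HW LBR stack into a per-task buffer on sched-out. */
static void lbr_stack_save(struct lbr_stack_entry *st, unsigned int nr)
{
	unsigned int i;

	for (i = 0; i < nr; i++) {
		rdmsrl(x86_pmu.lbr_from + i, st[i].from);
		rdmsrl(x86_pmu.lbr_to + i, st[i].to);
	}
}

/* Write the saved entries back into the LBR MSRs on sched-in. */
static void lbr_stack_restore(struct lbr_stack_entry *st, unsigned int nr)
{
	unsigned int i;

	for (i = 0; i < nr; i++) {
		wrmsrl(x86_pmu.lbr_from + i, st[i].from);
		wrmsrl(x86_pmu.lbr_to + i, st[i].to);
	}
}

Callers would pass nr = x86_pmu.lbr_nr and would also need to
save/restore the TOS MSR (x86_pmu.lbr_tos), which this sketch omits.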

Thanks,
Alexey

> 
> However, in practice the LBRs clear very quickly once any code runs,
> so a small leak shortly after each context switch is unlikely to be a
> functional problem for LBR-based sampling.
> The flush is mainly a security measure: we don't want to leak anything
> to an attacker.
> 
> Different threads in a process already must trust each other, so we
> can safely leak in this case without opening security holes.
> 
> The same holds when switching to kernel threads (such as the common
> switch-to-idle case), which share the same mm and are guaranteed not
> to be attackers.
> 
> For those cases, resetting the LBRs can be safely avoided.
> Check ctx_id and only reset the LBRs when switching to a different
> user process.
> 
> With the patch,
> Baseline (ms)    With LBR (ms)    Overhead
> 6508		 10350            59%
> 
> Reported-by: Sandhya Viswanathan <sandhya.viswanathan@...el.com>
> Suggested-by: Andi Kleen <ak@...ux.intel.com>
> Signed-off-by: Kan Liang <kan.liang@...ux.intel.com>
> ---
>  arch/x86/events/intel/lbr.c  | 16 ++++++++++++++--
>  arch/x86/events/perf_event.h |  1 +
>  2 files changed, 15 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/x86/events/intel/lbr.c b/arch/x86/events/intel/lbr.c
> index f3e006b..26344c4 100644
> --- a/arch/x86/events/intel/lbr.c
> +++ b/arch/x86/events/intel/lbr.c
> @@ -444,9 +444,21 @@ void intel_pmu_lbr_sched_task(struct perf_event_context *ctx, bool sched_in)
>  	 * are not tagged with an identifier, we need to wipe the LBR, even for
>  	 * per-cpu events. You simply cannot resolve the branches from the old
>  	 * address space.
> +	 * We don't need to wipe the LBR for a kernel thread which shares
> +	 * the same mm as the previous user thread.
>  	 */
> -	if (sched_in)
> -		intel_pmu_lbr_reset();
> +	if (!current || !current->mm)
> +		return;
> +	if (sched_in) {
> +		/*
> +		 * Only flush when switching to a user thread
> +		 * and the mm context has changed.
> +		 */
> +		if (current->mm->context.ctx_id != cpuc->last_ctx_id)
> +			intel_pmu_lbr_reset();
> +	} else {
> +		cpuc->last_ctx_id = current->mm->context.ctx_id;
> +	}
>  }
>  
>  static inline bool branch_user_callstack(unsigned br_sel)
> diff --git a/arch/x86/events/perf_event.h b/arch/x86/events/perf_event.h
> index 1562863..3aa3379 100644
> --- a/arch/x86/events/perf_event.h
> +++ b/arch/x86/events/perf_event.h
> @@ -217,6 +217,7 @@ struct cpu_hw_events {
>  	u64				br_sel;
>  	struct x86_perf_task_context	*last_task_ctx;
>  	int				last_log_id;
> +	u64				last_ctx_id;
>  
>  	/*
>  	 * Intel host/guest exclude bits
> 
