Message-ID: <20191015091617.GF2311@hirez.programming.kicks-ass.net>
Date:   Tue, 15 Oct 2019 11:16:17 +0200
From:   Peter Zijlstra <peterz@...radead.org>
To:     Alexey Budankov <alexey.budankov@...ux.intel.com>
Cc:     Arnaldo Carvalho de Melo <acme@...nel.org>,
        Ingo Molnar <mingo@...hat.com>,
        Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
        Jiri Olsa <jolsa@...hat.com>,
        Namhyung Kim <namhyung@...nel.org>,
        Andi Kleen <ak@...ux.intel.com>,
        Kan Liang <kan.liang@...ux.intel.com>,
        Stephane Eranian <eranian@...gle.com>,
        Ian Rogers <irogers@...gle.com>,
        Song Liu <songliubraving@...com>,
        linux-kernel <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v1] perf/core: fix restoring of Intel LBR call stack on a
 context switch

On Mon, Oct 14, 2019 at 09:08:34AM +0300, Alexey Budankov wrote:
> 
> Restore the Intel LBR call stack from the cloned, inactive task perf context
> on a context switch. This change also addresses inconsistent LBR call stack
> data provided on samples in record profiling mode, for example:
> 
>   $ perf record -N -B -T -R --call-graph lbr \
>          -e cpu/period=0xcdfe60,event=0x3c,name=\'CPU_CLK_UNHALTED.THREAD\'/Duk \
>          --clockid=monotonic_raw -- ./miniFE.x nx 25 ny 25 nz 25
> 
> Assume threads A, B and C belong to the same process.
> B and C are siblings of A, and their perf contexts are treated as equivalent.
> At some point B blocks on a futex (a non-preempt context switch).
> B's LBRs are preserved in B's perf context's task_ctx_data, and B's events
> are removed from the PMU and disabled. B's perf context becomes inactive.
> 
> Later C gets on a CPU, runs, gets profiled and eventually switches to B,
> which has been woken but is not yet running. The optimized context switch
> path is taken: B's task_ctx_data is copied to C's, and B's perf context
> pointer is updated to refer to C's task_ctx_data, which contains B's
> preserved LBRs after the copy.
> 
> However, since B's perf context is inactive, there are no enabled events
> in it and B's task_ctx_data->lbr_callstack_users equals 0.
> When B gets on the CPU, re-enabling B's events is skipped on the optimized
> context switch path and B's task_ctx_data->lbr_callstack_users remains 0.
> Thus B's LBRs are not restored by the PMU sched_task() code called at the
> end of the perf context switch sched_in callback for B.
> 
> In the report this manifests as short fragments of B's call stack, still
> tracked by the LBR hardware between adjacent samples, while the thread's
> call tree as a whole does not aggregate.
> 

> Signed-off-by: Alexey Budankov <alexey.budankov@...ux.intel.com>
> ---
>  kernel/events/core.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/kernel/events/core.c b/kernel/events/core.c
> index 2aad959e6def..74c2ff38e079 100644
> --- a/kernel/events/core.c
> +++ b/kernel/events/core.c
> @@ -3181,7 +3181,7 @@ static void perf_event_context_sched_out(struct task_struct *task, int ctxn,
>  
>  	rcu_read_lock();
>  	next_ctx = next->perf_event_ctxp[ctxn];
> -	if (!next_ctx)
> +	if (!next_ctx || !next_ctx->is_active)
>  		goto unlock;

AFAICT this completely kills off the optimization. next_ctx->is_active
cannot be set at this point.
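
For illustration, here is a toy user-space sketch of the failure mode the
changelog describes. The struct and field names only loosely mirror
kernel/events/core.c and the x86 LBR code; it is a simplification of the
scenario above, not the kernel implementation.

/* Toy model: why B's preserved LBR call stack is never restored. */
#include <stdbool.h>
#include <stdio.h>

struct task_ctx_data {
	int  lbr_callstack_users;  /* events that need LBR call stacks */
	bool lbr_stack_saved;      /* stand-in for the preserved LBR registers */
};

struct perf_ctx {
	bool is_active;
	struct task_ctx_data *task_ctx_data;
};

/* Loosely modeled on the x86 LBR sched_task path: the preserved call stack
 * is only written back to the hardware if the context reports users. */
static void lbr_sched_task_in(const char *who, struct perf_ctx *ctx)
{
	if (ctx->task_ctx_data->lbr_callstack_users == 0) {
		printf("%s sched_in: lbr_callstack_users == 0, LBR restore skipped\n", who);
		return;
	}
	printf("%s sched_in: LBR call stack restored\n", who);
}

int main(void)
{
	/* B blocked on a futex: its LBRs were saved into its task_ctx_data,
	 * its events were removed from the PMU, its context went inactive
	 * and its LBR user count dropped to 0. */
	struct task_ctx_data b_data = { .lbr_callstack_users = 0,
					.lbr_stack_saved = true };
	/* C is running and being profiled. */
	struct task_ctx_data c_data = { .lbr_callstack_users = 1,
					.lbr_stack_saved = false };
	struct perf_ctx b = { .is_active = false, .task_ctx_data = &b_data };
	struct perf_ctx c = { .is_active = true,  .task_ctx_data = &c_data };

	/* Optimized context switch C -> B, as the changelog describes it:
	 * B's task_ctx_data is copied to C's and B's context is re-pointed
	 * at C's task_ctx_data. The preserved LBRs travel along, but so does
	 * the zero user count, and nothing re-enables B's events. */
	c_data = b_data;
	b.task_ctx_data = &c_data;
	(void)c;

	/* B gets back on the CPU: its context still reports 0 LBR users,
	 * so the preserved call stack is never restored. */
	lbr_sched_task_in("B", &b);
	return 0;
}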

