We have a function that does exactly what we want here; use it. This
reduces the amount of cpuctx->task_ctx muckery.

Signed-off-by: Peter Zijlstra (Intel)
---
 kernel/events/core.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -2545,8 +2545,7 @@ static void perf_event_context_sched_out
 
 	if (do_switch) {
 		raw_spin_lock(&ctx->lock);
-		ctx_sched_out(ctx, cpuctx, EVENT_ALL);
-		cpuctx->task_ctx = NULL;
+		task_ctx_sched_out(cpuctx, ctx);
 		raw_spin_unlock(&ctx->lock);
 	}
 }
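
Note for readers of this hunk: the helper being substituted folds the
removed ctx_sched_out() call and the cpuctx->task_ctx clear into one
place. A sketch of what task_ctx_sched_out() looks like at this point
in the series (reconstructed for illustration, not quoted verbatim;
the guard checks at the top are assumptions, only the last two
statements are confirmed by the lines this patch removes):

static void task_ctx_sched_out(struct perf_cpu_context *cpuctx,
			       struct perf_event_context *ctx)
{
	/* Assumed guard: no task context scheduled in, nothing to do. */
	if (!cpuctx->task_ctx)
		return;

	/* Assumed sanity check: the ctx handed in must be the active one. */
	if (WARN_ON_ONCE(ctx != cpuctx->task_ctx))
		return;

	/* The two lines this patch removes from the call site: */
	ctx_sched_out(ctx, cpuctx, EVENT_ALL);
	cpuctx->task_ctx = NULL;
}

Going through the helper keeps the "schedule out, then clear task_ctx"
pairing in a single function, so call sites cannot get the two steps
out of sync.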