Message-ID: <Y0VTn0qLWd925etP@hirez.programming.kicks-ass.net>
Date: Tue, 11 Oct 2022 13:29:35 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Ravi Bangoria <ravi.bangoria@....com>
Cc: acme@...nel.org, alexander.shishkin@...ux.intel.com,
jolsa@...hat.com, namhyung@...nel.org, songliubraving@...com,
eranian@...gle.com, ak@...ux.intel.com, mark.rutland@....com,
frederic@...nel.org, maddy@...ux.ibm.com, irogers@...gle.com,
will@...nel.org, robh@...nel.org, mingo@...hat.com,
catalin.marinas@....com, ndesaulniers@...gle.com,
srw@...dewatkins.net, linux-arm-kernel@...ts.infradead.org,
linux-perf-users@...r.kernel.org, linuxppc-dev@...ts.ozlabs.org,
linux-s390@...r.kernel.org, linux-kernel@...r.kernel.org,
sandipan.das@....com, ananth.narayan@....com, kim.phillips@....com,
santosh.shukla@....com
Subject: Re: [PATCH v2] perf: Rewrite core context handling
On Sat, Oct 08, 2022 at 11:54:24AM +0530, Ravi Bangoria wrote:
> +static void perf_event_swap_task_ctx_data(struct perf_event_context *prev_ctx,
> +					   struct perf_event_context *next_ctx)
> +{
> +	struct perf_event_pmu_context *prev_epc, *next_epc;
> +
> +	if (!prev_ctx->nr_task_data)
> +		return;
> +
> +	prev_epc = list_first_entry(&prev_ctx->pmu_ctx_list,
> +				    struct perf_event_pmu_context,
> +				    pmu_ctx_entry);
> +	next_epc = list_first_entry(&next_ctx->pmu_ctx_list,
> +				    struct perf_event_pmu_context,
> +				    pmu_ctx_entry);
> +
> +	while (&prev_epc->pmu_ctx_entry != &prev_ctx->pmu_ctx_list &&
> +	       &next_epc->pmu_ctx_entry != &next_ctx->pmu_ctx_list) {
> +
> +		WARN_ON_ONCE(prev_epc->pmu != next_epc->pmu);
> +
> +		/*
> +		 * PMU specific parts of task perf context can require
> +		 * additional synchronization. As an example of such
> +		 * synchronization see implementation details of Intel
> +		 * LBR call stack data profiling;
> +		 */
> +		if (prev_epc->pmu->swap_task_ctx)
> +			prev_epc->pmu->swap_task_ctx(prev_epc, next_epc);
> +		else
> +			swap(prev_epc->task_ctx_data, next_epc->task_ctx_data);

Did I forget to advance the iterators here? As written, neither cursor
ever moves, so the loop spins on the first entries forever; something
like the sketch below should cure it.

> +	}
> +}
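
Something like so, perhaps (untested sketch; it assumes list_next_entry()
is the right way to step both cursors, and relies on both pmu_ctx_list
lists being kept in the same PMU order, which the WARN_ON_ONCE() above
already presumes):

	while (&prev_epc->pmu_ctx_entry != &prev_ctx->pmu_ctx_list &&
	       &next_epc->pmu_ctx_entry != &next_ctx->pmu_ctx_list) {

		WARN_ON_ONCE(prev_epc->pmu != next_epc->pmu);

		if (prev_epc->pmu->swap_task_ctx)
			prev_epc->pmu->swap_task_ctx(prev_epc, next_epc);
		else
			swap(prev_epc->task_ctx_data, next_epc->task_ctx_data);

		/* Advance both cursors so the loop terminates. */
		prev_epc = list_next_entry(prev_epc, pmu_ctx_entry);
		next_epc = list_next_entry(next_epc, pmu_ctx_entry);
	}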