[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <YwR9ShCHDBgrvT9s@worktop.programming.kicks-ass.net>
Date: Tue, 23 Aug 2022 09:10:02 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Ravi Bangoria <ravi.bangoria@....com>
Cc: acme@...nel.org, alexander.shishkin@...ux.intel.com,
jolsa@...hat.com, namhyung@...nel.org, songliubraving@...com,
eranian@...gle.com, alexey.budankov@...ux.intel.com,
ak@...ux.intel.com, mark.rutland@....com, megha.dey@...el.com,
frederic@...nel.org, maddy@...ux.ibm.com, irogers@...gle.com,
kim.phillips@....com, linux-kernel@...r.kernel.org,
santosh.shukla@....com
Subject: Re: [RFC v2] perf: Rewrite core context handling
On Tue, Aug 02, 2022 at 11:43:03AM +0530, Ravi Bangoria wrote:
> [...]
>
> > /*
> > @@ -2718,7 +2706,6 @@ static void ctx_resched(struct perf_cpu_context *cpuctx,
> > struct perf_event_context *task_ctx,
> > enum event_type_t event_type)
> > {
> > - enum event_type_t ctx_event_type;
> > bool cpu_event = !!(event_type & EVENT_CPU);
> >
> > /*
> > @@ -2728,11 +2715,13 @@ static void ctx_resched(struct perf_cpu_context *cpuctx,
> > if (event_type & EVENT_PINNED)
> > event_type |= EVENT_FLEXIBLE;
> >
> > - ctx_event_type = event_type & EVENT_ALL;
> > + event_type &= EVENT_ALL;
> >
> > - perf_pmu_disable(cpuctx->ctx.pmu);
> > - if (task_ctx)
> > - task_ctx_sched_out(cpuctx, task_ctx, event_type);
> > + perf_ctx_disable(&cpuctx->ctx);
> > + if (task_ctx) {
> > + perf_ctx_disable(task_ctx);
> > + task_ctx_sched_out(task_ctx, event_type);
> > + }
> >
> > /*
> > * Decide which cpu ctx groups to schedule out based on the types
> > @@ -2742,17 +2731,20 @@ static void ctx_resched(struct perf_cpu_context *cpuctx,
> > * - otherwise, do nothing more.
> > */
> > if (cpu_event)
> > - cpu_ctx_sched_out(cpuctx, ctx_event_type);
> > - else if (ctx_event_type & EVENT_PINNED)
> > - cpu_ctx_sched_out(cpuctx, EVENT_FLEXIBLE);
> > + ctx_sched_out(&cpuctx->ctx, event_type);
> > + else if (event_type & EVENT_PINNED)
> > + ctx_sched_out(&cpuctx->ctx, EVENT_FLEXIBLE);
> >
> > perf_event_sched_in(cpuctx, task_ctx, current);
> > - perf_pmu_enable(cpuctx->ctx.pmu);
> > +
> > + perf_ctx_enable(&cpuctx->ctx);
> > + if (task_ctx)
> > + perf_ctx_enable(task_ctx);
> > }
>
> ctx_resched() reschedule entire perf_event_context while adding new event
> to the context or enabling existing event in the context. We can probably
> optimize it by rescheduling only affected pmu_ctx.
Yes, it would probably make sense to add a pmu argument there and limit
the rescheduling where possible.
Powered by blists - more mailing lists