Message-ID: <20140210180819.GC27965@twins.programming.kicks-ass.net>
Date: Mon, 10 Feb 2014 19:08:19 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: Mark Rutland <mark.rutland@....com>
Cc: linux-kernel@...r.kernel.org, will.deacon@....com,
dave.martin@....com, Ingo Molnar <mingo@...hat.com>,
Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
Andi Kleen <ak@...ux.intel.com>
Subject: Re: [PATCH 6/7] perf: Centralise context pmu disabling
On Mon, Feb 10, 2014 at 05:44:23PM +0000, Mark Rutland wrote:
> Commit 443772776c69 (perf: Disable all pmus on unthrottling and
> rescheduling) identified an issue with having multiple PMUs sharing a
> perf_event_context, but only partially solved the issue.
>
> While ctx::pmu will be disabled across all of its events being
> scheduled, pmus which are not ctx::pmu will be repeatedly enabled and
> disabled between events being added, possibly counting in between
> pmu::add calls. This could be expensive and could lead to events
> counting for differing periods.
>
> Instead, this patch adds new helpers to disable/enable all pmus which
> have events in a context. While perf_pmu_{dis,en}able may be called
> repeatedly for a particular pmu, disabling is reference counted such
> that the real pmu::{dis,en}able callbacks are only called once (were
> this not the case, the current code would be broken for ctx::pmu).
>
> Uses of perf_pmu_{dis,en}able(ctx->pmu) are replaced with
> perf_ctx_pmus_{disable,enable}(ctx). The now unnecessary calls to
> perf_pmu_enable and perf_pmu_disable added by 443772776c69 are removed.
Hurmn; instead of adding more for_each_event iterations we should be
reducing them.
Given that we currently schedule first to last and stop on the first
event that fails to schedule, we can terminate the ctx_sched_out() loop
when it finds the first event that wasn't actually scheduled.