Message-ID: <Ywc+Kc7p9svJ79ml@worktop.programming.kicks-ass.net>
Date: Thu, 25 Aug 2022 11:17:29 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Ravi Bangoria <ravi.bangoria@....com>
Cc: acme@...nel.org, alexander.shishkin@...ux.intel.com,
jolsa@...hat.com, namhyung@...nel.org, songliubraving@...com,
eranian@...gle.com, alexey.budankov@...ux.intel.com,
ak@...ux.intel.com, mark.rutland@....com, megha.dey@...el.com,
frederic@...nel.org, maddy@...ux.ibm.com, irogers@...gle.com,
kim.phillips@....com, linux-kernel@...r.kernel.org,
santosh.shukla@....com
Subject: Re: [RFC v2] perf: Rewrite core context handling
On Thu, Aug 25, 2022 at 11:09:05AM +0530, Ravi Bangoria wrote:
> > -static inline int __pmu_filter_match(struct perf_event *event)
> > -{
> > -	struct pmu *pmu = event->pmu;
> > -	return pmu->filter_match ? pmu->filter_match(event) : 1;
> > -}
> > -
> > -/*
> > - * Check whether we should attempt to schedule an event group based on
> > - * PMU-specific filtering. An event group can consist of HW and SW events,
> > - * potentially with a SW leader, so we must check all the filters, to
> > - * determine whether a group is schedulable:
> > - */
> > -static inline int pmu_filter_match(struct perf_event *event)
> > -{
> > -	struct perf_event *sibling;
> > -
> > -	if (!__pmu_filter_match(event))
> > -		return 0;
> > -
> > -	for_each_sibling_event(sibling, event) {
> > -		if (!__pmu_filter_match(sibling))
> > -			return 0;
> > -	}
> > -
> > -	return 1;
> > -}
> > -
> >  static inline int
> >  event_filter_match(struct perf_event *event)
> >  {
> >  	return (event->cpu == -1 || event->cpu == smp_processor_id()) &&
> > -	       perf_cgroup_match(event) && pmu_filter_match(event);
> > +	       perf_cgroup_match(event);
>
> There are many callers of event_filter_match() which might not end up calling
> visit_groups_merge(). I hope this is an intentional change?
I thought I did, but let's go through them again.
event_filter_match() is called from:
 - __perf_event_enable(); here we'll end up in ctx_sched_in(), which
   will dutifully skip the pmu in question (rough sketch below).

   (fwiw, this is one of those sites where ctx_sched_{out,in}() could do
   with a @pmu argument.)

 - merge_sched_in(); this comes after the new filter callsite in
   visit_groups_merge(), i.e. the PMU filter has already been applied
   by then (rough sketch below).

 - perf_adjust_freq_unthr_context(); if the pmu was skipped in
   visit_groups_merge() then ->state != ACTIVE and we'll bail out
   (rough sketch below).

 - perf_iterate_ctx() / perf_iterate_sb_cpu(); these are for generating
   side-band events, and arguably not delivering them when running on
   the 'wrong' CPU wasn't right to begin with (rough sketch below).
So I think we're good. Hmm?
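
To make the first point a bit more concrete, a purely hypothetical sketch
of the @pmu argument idea; this is not code from the series, and I'm
guessing at the rewrite's perf_event_pmu_context naming:

static void ctx_sched_in(struct perf_event_context *ctx,
                         struct pmu *pmu, /* hypothetical extra argument */
                         enum event_type_t event_type)
{
        struct perf_event_pmu_context *pmu_ctx;

        list_for_each_entry(pmu_ctx, &ctx->pmu_ctx_list, pmu_ctx_entry) {
                /* Only (re)schedule the PMU the enabled event belongs to. */
                if (pmu && pmu_ctx->pmu != pmu)
                        continue;
                /* ... schedule pinned, then flexible groups for this PMU ... */
        }
}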
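
For merge_sched_in() the point is purely the ordering: by the time its
event_filter_match() check runs, visit_groups_merge() has already confined
the iteration to the PMU being scheduled. Roughly, from memory rather than
the exact code in the series:

static int merge_sched_in(struct perf_event *event, void *data)
{
        /*
         * visit_groups_merge() only hands us groups of the PMU it is
         * currently scheduling, so the old pmu_filter_match() half of
         * event_filter_match() would be redundant here anyway.
         */
        if (event->state <= PERF_EVENT_STATE_OFF)
                return 0;

        if (!event_filter_match(event))
                return 0;

        /* ... group_sched_in() and the can_add_hw handling elided ... */
        return 0;
}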
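
For perf_adjust_freq_unthr_context() the bail-out is the ->state check at
the top of its event loop; paraphrased, not the exact upstream code:

        list_for_each_entry_rcu(event, &ctx->event_list, event_entry) {
                /*
                 * If visit_groups_merge() skipped this event's PMU, the
                 * event never got scheduled in, so it is not ACTIVE and we
                 * move on before event_filter_match() is consulted at all.
                 */
                if (event->state != PERF_EVENT_STATE_ACTIVE)
                        continue;

                if (!event_filter_match(event))
                        continue;

                /* ... unthrottle and re-adjust the sampling period ... */
        }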
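
And the side-band iterators look roughly like this (again paraphrased):

static void perf_iterate_ctx(struct perf_event_context *ctx,
                             perf_iterate_f output,
                             void *data, bool all)
{
        struct perf_event *event;

        list_for_each_entry_rcu(event, &ctx->event_list, event_entry) {
                if (!all) {
                        if (event->state < PERF_EVENT_STATE_INACTIVE)
                                continue;
                        /*
                         * Without pmu_filter_match() we no longer drop
                         * side-band records for events whose PMU cannot run
                         * on this CPU; as argued above, dropping them was
                         * dubious to begin with.
                         */
                        if (!event_filter_match(event))
                                continue;
                }
                output(event, data);
        }
}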