Message-ID: <10d8889e-4ca9-7e4e-a3e4-d769da79d047@amd.com>
Date:   Thu, 25 Aug 2022 11:09:05 +0530
From:   Ravi Bangoria <ravi.bangoria@....com>
To:     Peter Zijlstra <peterz@...radead.org>
Cc:     acme@...nel.org, alexander.shishkin@...ux.intel.com,
        jolsa@...hat.com, namhyung@...nel.org, songliubraving@...com,
        eranian@...gle.com, alexey.budankov@...ux.intel.com,
        ak@...ux.intel.com, mark.rutland@....com, megha.dey@...el.com,
        frederic@...nel.org, maddy@...ux.ibm.com, irogers@...gle.com,
        kim.phillips@....com, linux-kernel@...r.kernel.org,
        santosh.shukla@....com, ravi.bangoria@....com
Subject: Re: [RFC v2] perf: Rewrite core context handling

> -static inline int __pmu_filter_match(struct perf_event *event)
> -{
> -	struct pmu *pmu = event->pmu;
> -	return pmu->filter_match ? pmu->filter_match(event) : 1;
> -}
> -
> -/*
> - * Check whether we should attempt to schedule an event group based on
> - * PMU-specific filtering. An event group can consist of HW and SW events,
> - * potentially with a SW leader, so we must check all the filters, to
> - * determine whether a group is schedulable:
> - */
> -static inline int pmu_filter_match(struct perf_event *event)
> -{
> -	struct perf_event *sibling;
> -
> -	if (!__pmu_filter_match(event))
> -		return 0;
> -
> -	for_each_sibling_event(sibling, event) {
> -		if (!__pmu_filter_match(sibling))
> -			return 0;
> -	}
> -
> -	return 1;
> -}
> -
>  static inline int
>  event_filter_match(struct perf_event *event)
>  {
>  	return (event->cpu == -1 || event->cpu == smp_processor_id()) &&
> -	       perf_cgroup_match(event) && pmu_filter_match(event);
> +	       perf_cgroup_match(event);

There are many callers of event_filter_match() which might not end up
calling visit_groups_merge(). I hope this is an intentional change?
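
For illustration, a minimal, self-contained sketch of the concern
(hypothetical code, not the actual core.c call sites; the filter
signature is simplified from pmu->filter(pmu, cpu)): a path that checks
event_filter_match() directly no longer consults the PMU filter, while
visit_groups_merge() does.

	#include <stdbool.h>
	#include <stdio.h>

	struct pmu {
		bool (*filter)(void);		/* simplified from filter(pmu, cpu) */
	};

	struct perf_event {
		struct pmu *pmu;
		int cpu;
	};

	/* Pretend the PMU filter rejects events on this CPU. */
	static bool pmu_rejects(void)
	{
		return true;
	}

	/* After the patch: no PMU filter check left in here. */
	static bool event_filter_match(struct perf_event *event)
	{
		return event->cpu == -1 || event->cpu == 0;	/* smp_processor_id() stand-in */
	}

	/* New (and only) home of the PMU filter check. */
	static void visit_groups_merge(struct perf_event *event)
	{
		if (event->pmu->filter && event->pmu->filter())
			return;				/* whole PMU skipped here */
		printf("scheduled via visit_groups_merge()\n");
	}

	/* Hypothetical caller that never reaches visit_groups_merge(). */
	static void some_other_path(struct perf_event *event)
	{
		if (event_filter_match(event))		/* PMU filter not consulted */
			printf("event accepted on the other path\n");
	}

	int main(void)
	{
		struct pmu pmu = { .filter = pmu_rejects };
		struct perf_event event = { .pmu = &pmu, .cpu = 0 };

		visit_groups_merge(&event);	/* filtered out: prints nothing    */
		some_other_path(&event);	/* accepted despite the PMU filter */
		return 0;
	}

With the old pmu_filter_match() folded into event_filter_match(), both
paths in this sketch would have rejected the event.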

>  }
>  
>  static void
> @@ -3661,6 +3634,9 @@ static noinline int visit_groups_merge(struct perf_event_context *ctx,
>  	struct perf_event **evt;
>  	int ret;
>  
> +	if (pmu->filter && pmu->filter(pmu, cpu))
> +		return 0;
> +
>  	if (!ctx->task) {
>  		cpuctx = this_cpu_ptr(&cpu_context);
>  		event_heap = (struct min_heap){

Thanks,
Ravi
