Message-ID: <b69595c9-5240-40ea-89e6-c36331ca245c@linux.intel.com>
Date: Fri, 14 Mar 2025 14:48:00 -0400
From: "Liang, Kan" <kan.liang@...ux.intel.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: mingo@...nel.org, acme@...nel.org, namhyung@...nel.org,
 irogers@...gle.com, adrian.hunter@...el.com, ak@...ux.intel.com,
 linux-kernel@...r.kernel.org, eranian@...gle.com, thomas.falcon@...el.com
Subject: Re: [PATCH V2 3/3] perf/x86/intel: Support auto counter reload



On 2025-03-14 9:48 a.m., Liang, Kan wrote:
>>> +	}
>>> +}
>>> +
>>> +static int intel_pmu_schedule_events(struct cpu_hw_events *cpuc, int n, int *assign)
>>> +{
>>> +	struct perf_event *event;
>>> +	int ret = x86_schedule_events(cpuc, n, assign);
>>> +
>>> +	if (ret)
>>> +		return ret;
>>> +
>>> +	if (cpuc->is_fake)
>>> +		return ret;
>>> +
>>> +	event = cpuc->event_list[n - 1];
>> ISTR seeing this pattern before somewhere and then argued it was all
>> sorts of broken. Why is it sane to look at the last event here?
> The schedule_events() is invoked for only two cases, a new event or a
> new group. Since the event_list[] is in enabled order, the last event
> should be either the new event or the last event of the new group.
> 
> The is_acr_event_group() always checks the leader's flag. It doesn't
> matter which event in the ACR group is used to do the check.
> 
> Checking the last event should be good enough to cover both cases.

This is an old implementation. I actually sent a V3 last month which
moves the code to late_setup(). The late_setup() callback was introduced
by the counters snapshotting feature. It does the late configuration in
x86_pmu_enable(), after the counters are assigned.
https://lore.kernel.org/lkml/173874832555.10177.18398857610370220622.tip-bot2@tip-bot2/
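
To illustrate the two-phase flow (just a toy userspace sketch, not the
kernel code; all names below are made up for illustration):

#include <stdio.h>

#define TOY_MAX_EVENTS	4

struct toy_event {
	int id;
	int idx;	/* final counter index, only known after scheduling */
};

/* phase 1: decide which counter each event gets (trivial here) */
static void toy_schedule(struct toy_event *evts, int n)
{
	for (int i = 0; i < n; i++)
		evts[i].idx = i;
}

/*
 * phase 2: late configuration. It runs after every event has its final
 * counter, so it can simply walk the whole list rather than peeking at
 * the last entry of event_list[].
 */
static void toy_late_setup(struct toy_event *evts, int n)
{
	for (int i = 0; i < n; i++)
		printf("event %d -> counter %d\n", evts[i].id, evts[i].idx);
}

int main(void)
{
	struct toy_event evts[TOY_MAX_EVENTS] = { {1}, {2}, {3} };

	toy_schedule(evts, 3);		/* like x86_schedule_events() */
	toy_late_setup(evts, 3);	/* like late_setup() in x86_pmu_enable() */
	return 0;
}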

We don't need to check the last event anymore.

The V3 optimizes the late_setup() a little bit.
https://lore.kernel.org/lkml/20250213211718.2406744-3-kan.liang@linux.intel.com/

and extends it for both counters snapshotting and ACR.
https://lore.kernel.org/lkml/20250213211718.2406744-6-kan.liang@linux.intel.com/
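
For the ACR side, that late hook can then check only the group leader's
flag and program the reload values once the final counter indices are
known. Very roughly, in the same made-up toy style (not the real patch
code; the flag and field names are invented):

#include <stdbool.h>
#include <stdio.h>

struct toy_event {
	int idx;			/* final counter index */
	bool acr_flag;			/* only meaningful on the group leader */
	struct toy_event *leader;
	unsigned long long reload;
};

/* mirrors the point above: any group member can be checked via its leader */
static bool toy_is_acr_group(struct toy_event *e)
{
	return e->leader->acr_flag;
}

static void toy_acr_late_setup(struct toy_event **evts, int n)
{
	for (int i = 0; i < n; i++) {
		if (!toy_is_acr_group(evts[i]))
			continue;
		/* program the reload value for the final counter index */
		printf("counter %d: reload = %llu\n",
		       evts[i]->idx, evts[i]->reload);
	}
}

int main(void)
{
	struct toy_event leader = { .idx = 0, .acr_flag = true, .reload = 100 };
	struct toy_event member = { .idx = 1, .reload = 200 };
	struct toy_event *evts[] = { &leader, &member };

	leader.leader = &leader;
	member.leader = &leader;

	toy_acr_late_setup(evts, 2);
	return 0;
}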

But the other comments still stand. I will send a V4 later.

Thanks,
Kan
