Message-ID: <6467b30e-26a5-444c-bc20-5be7690e1e4c@linux.intel.com>
Date: Tue, 17 Dec 2024 15:59:59 -0500
From: "Liang, Kan" <kan.liang@...ux.intel.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: mingo@...hat.com, acme@...nel.org, namhyung@...nel.org,
 irogers@...gle.com, linux-kernel@...r.kernel.org,
 linux-perf-users@...r.kernel.org, ak@...ux.intel.com, eranian@...gle.com
Subject: Re: [PATCH V5 4/4] perf/x86/intel: Support PEBS counters snapshotting



On 2024-12-17 3:29 p.m., Peter Zijlstra wrote:
> On Tue, Dec 17, 2024 at 12:45:56PM -0500, Liang, Kan wrote:
> 
> 
>>> Why can't you use something like the below -- that gives you a count
>>> value matching the pmc value you put in, as long as it is 'near' the
>>> current value.
>>>
>>> ---
>>> diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
>>> index 8f218ac0d445..3cf8b4f2b2c1 100644
>>> --- a/arch/x86/events/core.c
>>> +++ b/arch/x86/events/core.c
>>> @@ -154,6 +154,26 @@ u64 x86_perf_event_update(struct perf_event *event)
>>>  	return new_raw_count;
>>>  }
>>>  
>>> +u64 x86_perf_event_pmc_to_count(struct perf_event *event, u64 pmc)
>>> +{
>>> +	struct hw_perf_event *hwc = &event->hw;
>>> +	int shift = 64 - x86_pmu.cntval_bits;
>>> +	u64 prev_pmc, prev_count;
>>> +	u64 delta;
>>> +
>>> +	do {
>>> +		prev_pmc = local64_read(&hwc->prev_count);
>>> +		barrier();
>>> +		prev_count = local64_read(&event->count);
>>> +		barrier();
>>> +	} while (prev_pmc != local64_read(&hwc->prev_count));
>>
>> Is the "while()" to handle PMI? But there should be no PMI, since the
>> PMU has been disabled when draining the PEBS buffer.
> 
> Perhaps not in your case, but this way the function is more widely
> usable.
> 
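(As an aside, the lockless consistent-read pattern the loop implements is
roughly the standalone sketch below; the struct and helper names (snap,
snap_read) are made up for illustration and are not kernel code.)

#include <stdatomic.h>
#include <stdint.h>

struct snap {
	_Atomic uint64_t prev;	/* last raw counter value seen */
	_Atomic uint64_t count;	/* accumulated event count */
};

/*
 * Read a consistent (prev, count) pair without locking: if an updater
 * (e.g. a PMI handler) changed 'prev' while we were reading, retry.
 */
static void snap_read(struct snap *s, uint64_t *prev, uint64_t *count)
{
	uint64_t p;

	do {
		p = atomic_load(&s->prev);
		*count = atomic_load(&s->count);
	} while (p != atomic_load(&s->prev));

	*prev = p;
}
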
>> diff --git a/arch/x86/events/intel/ds.c b/arch/x86/events/intel/ds.c
>> index e06ac9a3cdf8..7f0b850f7277 100644
>> --- a/arch/x86/events/intel/ds.c
>> +++ b/arch/x86/events/intel/ds.c
>> @@ -1969,6 +1969,23 @@ static void adaptive_pebs_save_regs(struct pt_regs *regs,
>>
>>  #define PEBS_LATENCY_MASK			0xffff
>>
>> +void intel_perf_event_pmc_to_count(struct perf_event *event, u64 pmc)
>> +{
>> +	struct hw_perf_event *hwc = &event->hw;
>> +	int shift = 64 - x86_pmu.cntval_bits;
>> +	u64 prev_pmc;
>> +	u64 delta;
>> +
>> +	prev_pmc = local64_read(&hwc->prev_count);
>> +
>> +	delta = (pmc << shift) - (prev_pmc << shift);
>> +	delta >>= shift;
>> +
>> +	local64_add(delta, &event->count);
>> +	local64_sub(delta, &hwc->period_left);
>> +	local64_set(&hwc->prev_count, pmc);
>> +}
> 
> This seems very fragile, at least keep the same store order and assert
> you're in NMI/PMI context.

Sure, I will keep the store order.
You mean assert in case there is an unexpected NMI during the normal
drain, right? I think we can check that the PMU is disabled, as below.

@@ -1974,12 +1974,15 @@ static void intel_perf_event_pmc_to_count(struct perf_event *event, u64 pmc)
 	int shift = 64 - x86_pmu.cntval_bits;
 	u64 delta;

+	/* Only read/update the count when the PMU is disabled */
+	WARN_ON(this_cpu_read(cpu_hw_events.enabled));
+	local64_set(&hwc->prev_count, pmc);
+
 	delta = (pmc << shift) - (prev_pmc << shift);
 	delta >>= shift;

 	local64_add(delta, &event->count);
 	local64_sub(delta, &hwc->period_left);
-	local64_set(&hwc->prev_count, pmc);
 }
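
For reference, the shift pair is what makes the delta wrap-safe for a
counter narrower than 64 bits. A standalone sketch (not kernel code; the
48-bit width and the sample values are just assumptions for illustration):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	int cntval_bits = 48;			/* assumed counter width */
	int shift = 64 - cntval_bits;
	uint64_t prev_pmc = 0xFFFFFFFFFFF0ULL;	/* just below the 48-bit wrap */
	uint64_t pmc	  = 0x10ULL;		/* counter has wrapped */
	uint64_t delta;

	/*
	 * Widen both values to the top of the 64-bit register so the
	 * subtraction wraps correctly, then shift the result back down.
	 */
	delta = (pmc << shift) - (prev_pmc << shift);
	delta >>= shift;

	printf("delta = %#llx\n", (unsigned long long)delta);	/* 0x20 */
	return 0;
}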

Thanks,
Kan


