Message-ID: <alpine.DEB.2.10.1604250925380.18257@vshiva-Udesk>
Date:	Mon, 25 Apr 2016 09:26:35 -0700 (PDT)
From:	Vikas Shivappa <vikas.shivappa@...el.com>
To:	Peter Zijlstra <peterz@...radead.org>
cc:	Vikas Shivappa <vikas.shivappa@...ux.intel.com>,
	tony.luck@...el.com, ravi.v.shankar@...el.com,
	fenghua.yu@...el.com, vikas.shivappa@...el.com, x86@...nel.org,
	linux-kernel@...r.kernel.org, hpa@...or.com, tglx@...utronix.de,
	mingo@...nel.org, h.peter.anvin@...el.com
Subject: Re: [PATCH 1/4] perf/x86/cqm,mbm: Store cqm,mbm count for all events
 when RMID is recycled



On Mon, 25 Apr 2016, Peter Zijlstra wrote:

> On Fri, Apr 22, 2016 at 05:27:18PM -0700, Vikas Shivappa wrote:
>> During RMID recycling, when an event loses its RMID, we saved the
>> counter for the group leader but not for the other events in the
>> event group. This could lead to a situation where, if two perf
>> instances are counting the same PID, one of them would not see the
>> updated count that the other instance sees. This patch fixes the
>> issue by saving the count for every event in the event group.
>
>
>> @@ -486,14 +495,21 @@ static u32 intel_cqm_xchg_rmid(struct perf_event *group, u32 rmid)
>>  	 * If our RMID is being deallocated, perform a read now.
>>  	 */
>>  	if (__rmid_valid(old_rmid) && !__rmid_valid(rmid)) {
>>
>> +		rr = __init_rr(old_rmid, group->attr.config, 0);
>>  		cqm_mask_call(&rr);
>>  		local64_set(&group->count, atomic64_read(&rr.value));
>> +		list_for_each_entry(event, head, hw.cqm_group_entry) {
>> +			if (event->hw.is_group_event) {
>> +
>> +				evttype = event->attr.config;
>> +				rr = __init_rr(old_rmid, evttype, 0);
>> +
>> +				cqm_mask_call(&rr);
>> +					local64_set(&event->count,
>> +						    atomic64_read(&rr.value));
>
> Randomly indent much?

Will fix. The extra indentation was added by mistake; it belongs in the next patch.

Thanks,
Vikas

>
>> +			}
>> +		}
>>  	}
>
