Date:	Mon, 25 Apr 2016 11:04:38 -0700 (PDT)
From:	Vikas Shivappa <vikas.shivappa@...el.com>
To:	Peter Zijlstra <peterz@...radead.org>
cc:	Vikas Shivappa <vikas.shivappa@...ux.intel.com>,
	tony.luck@...el.com, ravi.v.shankar@...el.com,
	fenghua.yu@...el.com, vikas.shivappa@...el.com, x86@...nel.org,
	linux-kernel@...r.kernel.org, hpa@...or.com, tglx@...utronix.de,
	mingo@...nel.org, h.peter.anvin@...el.com
Subject: Re: [PATCH 2/4] perf/x86/mbm: Store bytes counted for mbm during
 recycle



On Mon, 25 Apr 2016, Peter Zijlstra wrote:

> On Fri, Apr 22, 2016 at 05:27:19PM -0700, Vikas Shivappa wrote:
>> +static inline void mbm_set_rccount(
>> +			  struct perf_event *event, struct rmid_read *rr)
>
> That's horrible style, the 'normal' style is something like:
>
> static inline
> void mbm_set_rccount(struct perf_event *event, struct rmid_read *rr)
> {
> }

Will fix, thanks for pointing out.

>
>> @@ -1244,8 +1270,16 @@ static u64 intel_cqm_event_count(struct perf_event *event)
>>  	cqm_mask_call(&rr);
>>
>>  	raw_spin_lock_irqsave(&cache_lock, flags);
>> -	if (event->hw.cqm_rmid == rr.rmid)
>> -		local64_set(&event->count, atomic64_read(&rr.value));
>> +	if (event->hw.cqm_rmid == rr.rmid) {
>> +		if (is_mbm_event(event->attr.config)) {
>> +			tmpval = atomic64_read(&rr.value) +
>> +				local64_read(&event->hw.rc_count);
>> +
>> +			local64_set(&event->count, tmpval);
>> +		} else {
>> +			local64_set(&event->count, atomic64_read(&rr.value));
>> +		}
>> +	}
>>  	raw_spin_unlock_irqrestore(&cache_lock, flags);
>>  out:
>>  	return __perf_event_count(event);
>
> This is a 'creative' solution; why don't you do the normal thing, which
> is:
>
> start:
> 	prev_count = read_hw_counter();
>
> read:
> 	do {
> 		prev = prev_count;
> 		cur_val = read_hw_counter();
> 		delta = cur_val - prev;
> 	} while (local_cmpxchg(&prev_count, prev, cur_val) != prev);
> 	count += delta;
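For reference, a minimal sketch of the pattern described above, written with
the perf local64 helpers. read_hw_counter() is a stand-in for the actual MSR
read and the function names are illustrative only, not the real cqm/mbm code:

static u64 read_hw_counter(void);	/* placeholder for the hardware read */

static void mbm_event_start(struct perf_event *event)
{
	/* Snapshot the current hardware value as the reference point. */
	local64_set(&event->hw.prev_count, read_hw_counter());
}

static void mbm_event_read(struct perf_event *event)
{
	u64 prev, cur, delta;

	/* Accumulate only the delta since the last snapshot. */
	do {
		prev  = local64_read(&event->hw.prev_count);
		cur   = read_hw_counter();
		delta = cur - prev;
	} while (local64_cmpxchg(&event->hw.prev_count, prev, cur) != prev);

	local64_add(delta, &event->count);
}

With this scheme, losing an RMID only requires redoing the 'start' snapshot
against the new RMID; no separate per-event byte accumulator is needed.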


I may need to update the comment.

rc_count stores the total bytes for all the RMIDs this event has used,
except for the current RMID. Say an event used RMID(1) .. RMID(k) between
init and read, and held RMID(k) when read was called: rc_count then holds
the sum of the values read from RMID(1) .. RMID(k-1).

For MBM the patch is trying to do:

count = total_bytes of RMID(1) + ... + total_bytes of RMID(k-1)
        + total_bytes of RMID(k)
      = rc_count + total_bytes of RMID(k).

1. event1 init: rc_count = 0. event1 gets RMID1.
2. event1 loses RMID1 due to recycling. Current total_bytes for RMID1 is 50MB.
3. rc_count += 50MB.
4. event1 gets RMID2. total_bytes for RMID2 is reset to zero, which is
   basically the prev_count = read_hw_counter() step.
5. event1 loses RMID2 due to recycling. Current total_bytes for RMID2 is 30MB.
6. rc_count += 30MB.
7. event1 gets RMID3.
8. event1 read is called. total_bytes read from RMID3 is 10MB.
9. count = rc_count (80MB) + 10MB (read from RMID3) = 90MB.
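
In code, the recycling step (3 and 6 above) is roughly what mbm_set_rccount()
has to do. This is only a sketch based on the description, not the actual
patch body; rr->value here is the final total_bytes read for the RMID the
event is losing:

static inline
void mbm_set_rccount(struct perf_event *event, struct rmid_read *rr)
{
	u64 tmpval;

	/* Fold the departing RMID's total_bytes into rc_count. */
	tmpval = local64_read(&event->hw.rc_count) + atomic64_read(&rr->value);
	local64_set(&event->hw.rc_count, tmpval);
}

The read path (step 9) is then the hunk quoted above:
count = rc_count + total_bytes of the current RMID.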

Thanks,
Vikas

>
>
>
