Message-ID: <20160425092035.GH3430@twins.programming.kicks-ass.net>
Date: Mon, 25 Apr 2016 11:20:35 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Vikas Shivappa <vikas.shivappa@...ux.intel.com>
Cc: tony.luck@...el.com, ravi.v.shankar@...el.com,
fenghua.yu@...el.com, vikas.shivappa@...el.com, x86@...nel.org,
linux-kernel@...r.kernel.org, hpa@...or.com, tglx@...utronix.de,
mingo@...nel.org, h.peter.anvin@...el.com
Subject: Re: [PATCH 1/4] perf/x86/cqm,mbm: Store cqm,mbm count for all events when RMID is recycled
On Fri, Apr 22, 2016 at 05:27:18PM -0700, Vikas Shivappa wrote:
> During RMID recycling, when an event loses its RMID we saved the counter
> for the group leader, but it was not being saved for the other events in
> the event group. This leads to a situation where, if two perf instances
> are counting the same PID, one of them does not see the updated count
> that the other instance sees. This patch fixes the issue by saving the
> count for all the events in the same event group.
> @@ -486,14 +495,21 @@ static u32 intel_cqm_xchg_rmid(struct perf_event *group, u32 rmid)
> * If our RMID is being deallocated, perform a read now.
> */
> if (__rmid_valid(old_rmid) && !__rmid_valid(rmid)) {
>
> + rr = __init_rr(old_rmid, group->attr.config, 0);
> cqm_mask_call(&rr);
> local64_set(&group->count, atomic64_read(&rr.value));
> + list_for_each_entry(event, head, hw.cqm_group_entry) {
> + if (event->hw.is_group_event) {
> +
> + evttype = event->attr.config;
> + rr = __init_rr(old_rmid, evttype, 0);
> +
> + cqm_mask_call(&rr);
> + local64_set(&event->count,
> + atomic64_read(&rr.value));
Randomly indent much?
> + }
> + }
> }