Message-ID: <20150519105121.GG19282@twins.programming.kicks-ass.net>
Date: Tue, 19 May 2015 12:51:21 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Matt Fleming <matt@...eblueprint.co.uk>
Cc: Thomas Gleixner <tglx@...utronix.de>,
LKML <linux-kernel@...r.kernel.org>,
Vikas Shivappa <vikas.shivappa@...ux.intel.com>,
x86@...nel.org, Matt Fleming <matt.fleming@...el.com>,
Will Auld <will.auld@...el.com>,
Kanaka Juvva <kanaka.d.juvva@...el.com>
Subject: Re: [patch 3/6] x86, perf, cqm: Remove pointless spinlock from state cache

On Tue, May 19, 2015 at 10:13:18AM +0100, Matt Fleming wrote:
> On Tue, 19 May, at 12:00:53AM, Thomas Gleixner wrote:
> > struct intel_cqm_state is a strict per cpu cache of the rmid and the
> > usage counter. It can never be modified from a remote cpu.
> >
> > The 3 functions which modify the content: start, stop and del (del
> > maps to stop) are called from the perf core with interrupts disabled
> > which is enough protection for the per cpu state values.
> >
> > Signed-off-by: Thomas Gleixner <tglx@...utronix.de>
> > ---
> > arch/x86/kernel/cpu/perf_event_intel_cqm.c | 17 ++++++-----------
> > 1 file changed, 6 insertions(+), 11 deletions(-)
>
> The state locking code was taken from Peter's original patch last year,
> so it would be good for him to chime in that this is safe. It's probably
> just that it was necessary in Peter's patches but after I refactored
> bits I forgot to rip it out.
>
> But yeah, from reading the code again the lock does look entirely
> superfluous.
I think that all stems from a point in time when it wasn't at all clear
to me what the hardware looked like, but what do I know, I can't even
remember last week.
All the patches looked good to me, so I already queued them.
I'll add your Ack on them.