Message-ID: <20100917085124.GK13563@erda.amd.com>
Date:	Fri, 17 Sep 2010 10:51:24 +0200
From:	Robert Richter <robert.richter@....com>
To:	Peter Zijlstra <peterz@...radead.org>
CC:	Ingo Molnar <mingo@...e.hu>, Don Zickus <dzickus@...hat.com>,
	"gorcunov@...il.com" <gorcunov@...il.com>,
	"fweisbec@...il.com" <fweisbec@...il.com>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	"ying.huang@...el.com" <ying.huang@...el.com>,
	"ming.m.lin@...el.com" <ming.m.lin@...el.com>,
	"yinghai@...nel.org" <yinghai@...nel.org>,
	"andi@...stfloor.org" <andi@...stfloor.org>,
	"eranian@...gle.com" <eranian@...gle.com>
Subject: Re: [PATCH] perf, x86: catch spurious interrupts after disabling
 counters

On 16.09.10 13:34:40, Peter Zijlstra wrote:
> On Wed, 2010-09-15 at 18:20 +0200, Robert Richter wrote:
> > Some cpus still deliver spurious interrupts after disabling a counter.
> > This caused 'undelivered NMI' messages. This patch fixes this.
> > 
> I tried the below and that also seems to work. So yeah, looks like
> we're getting late NMIs.

I would still prefer the fix I sent. This patch does an rdmsrl() on
every inactive counter with each NMI. It also rewrites the counter
value of all inactive counters, so restarting a counter by only
setting the enable bit may start from an unexpected counter value (I
didn't check whether the current implementation makes this a problem).

It is also not possible to detect in hardware which counter fired the
interrupt. We cannot assume a counter overflowed just by reading the
upper bit of its value; we must track this in software.

-Robert

> 
> ---
>  arch/x86/kernel/cpu/perf_event.c |   21 ++++++++++++++++++++-
>  1 files changed, 20 insertions(+), 1 deletions(-)
> 
> diff --git a/arch/x86/kernel/cpu/perf_event.c b/arch/x86/kernel/cpu/perf_event.c
> index 0fb1705..9a261ac 100644
> --- a/arch/x86/kernel/cpu/perf_event.c
> +++ b/arch/x86/kernel/cpu/perf_event.c
> @@ -1145,6 +1145,22 @@ static void x86_pmu_del(struct perf_event *event, int flags)
>  	perf_event_update_userpage(event);
>  }
>  
> +static int fixup_overflow(int idx)
> +{
> +	u64 val;
> +
> +	rdmsrl(x86_pmu.perfctr + idx, val);
> +	if (!(val & (1ULL << (x86_pmu.cntval_bits - 1)))) {
> +		val = (u64)(-x86_pmu.max_period);
> +		val &= x86_pmu.cntval_mask;
> +		wrmsrl(x86_pmu.perfctr + idx, val);
> +
> +		return 1;
> +	}
> +
> +	return 0;
> +}
> +
>  static int x86_pmu_handle_irq(struct pt_regs *regs)
>  {
>  	struct perf_sample_data data;
> @@ -1159,8 +1175,11 @@ static int x86_pmu_handle_irq(struct pt_regs *regs)
>  	cpuc = &__get_cpu_var(cpu_hw_events);
>  
>  	for (idx = 0; idx < x86_pmu.num_counters; idx++) {
> -		if (!test_bit(idx, cpuc->active_mask))
> +		if (!test_bit(idx, cpuc->active_mask)) {
> +			if (fixup_overflow(idx))
> +				handled++;
>  			continue;
> +		}
>  
>  		event = cpuc->events[idx];
>  		hwc = &event->hw;
> 
> --
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to majordomo@...r.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
> Please read the FAQ at  http://www.tux.org/lkml/
> 

-- 
Advanced Micro Devices, Inc.
Operating System Research Center
