Message-ID: <AANLkTimeAbm6eHsf++-5OO6ba9zDYehw=W_zfbi+2Z=g@mail.gmail.com>
Date: Fri, 24 Sep 2010 15:38:15 +0200
From: Stephane Eranian <eranian@...gle.com>
To: Robert Richter <robert.richter@....com>
Cc: Don Zickus <dzickus@...hat.com>, Ingo Molnar <mingo@...e.hu>,
Peter Zijlstra <peterz@...radead.org>,
"gorcunov@...il.com" <gorcunov@...il.com>,
"fweisbec@...il.com" <fweisbec@...il.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"ying.huang@...el.com" <ying.huang@...el.com>,
"ming.m.lin@...el.com" <ming.m.lin@...el.com>,
"yinghai@...nel.org" <yinghai@...nel.org>,
"andi@...stfloor.org" <andi@...stfloor.org>
Subject: Re: [PATCH] perf, x86: catch spurious interrupts after disabling counters
On Fri, Sep 24, 2010 at 12:03 PM, Robert Richter <robert.richter@....com> wrote:
> On 23.09.10 23:18:34, Don Zickus wrote:
>
>> > I was able to duplicate the problem and can confirm this patch fixes the
>> > issue for me. I tried poking around (similar to things Robert probably
>> > did) and had no luck. Something just doesn't make sense, but I guess for
>> > now this patch is good enough for me. :-)
>>
>> Ah ha! I figured out what the problem was: we need to disable the pmu
>> while processing the NMI. :-) Finally something simple in this crazy
>> unknown-NMI spree.
>>
>> Oh yeah and trace_printk is now my new favorite tool!
>>
>> From: Don Zickus <dzickus@...hat.com>
>> Date: Thu, 23 Sep 2010 22:52:09 -0400
>> Subject: [PATCH] x86, perf: disable pmu from counting when processing irq
>>
>> On certain AMD and Intel machines, the pmu is left enabled while the
>> counters are reset during handling of the NMI. After a counter is
>> reset, the code continues processing the overflow. During this time
>> another counter-overflow interrupt can be raised because the counter
>> is still ticking; by the time that second NMI is delivered, the
>> overflow has already been handled, so nothing claims it and it is
>> reported as an unknown NMI.
>
> I don't like the approach of disabling all counters in the nmi
> handler. First, it stops counting and thus may falsify the
> measurement. Second, it introduces a lot of overhead, doing a
> rdmsrl()/wrmsrl() pair for each counter.
>
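For a concrete sense of the cost Robert mentions, here is a minimal
user-space sketch of a per-counter disable loop. The MSR accessors are
stubs and the names and constants are illustrative only; this only
mirrors the shape of such a loop, it is not the kernel's
x86_pmu_disable_all():

/*
 * Sketch: one read-modify-write of an eventsel "MSR" per active
 * counter.  The accessors are stubs so this builds and runs in user
 * space; it is NOT the actual kernel implementation.
 */
#include <stdint.h>
#include <stdio.h>

#define NUM_COUNTERS    4
#define EVENTSEL_ENABLE (1ull << 22)    /* illustrative "enable" bit */

static uint64_t fake_eventsel[NUM_COUNTERS];    /* stand-in for MSRs */

static uint64_t read_msr(int idx)  { return fake_eventsel[idx]; }      /* stub for rdmsrl() */
static void write_msr(int idx, uint64_t val) { fake_eventsel[idx] = val; } /* stub for wrmsrl() */

static void pmu_disable_all(unsigned long active_mask)
{
        int idx;

        for (idx = 0; idx < NUM_COUNTERS; idx++) {
                uint64_t val;

                if (!(active_mask & (1ul << idx)))
                        continue;
                val = read_msr(idx);            /* one MSR read ...  */
                if (!(val & EVENTSEL_ENABLE))
                        continue;
                write_msr(idx, val & ~EVENTSEL_ENABLE); /* ... one write */
        }
}

int main(void)
{
        int i;

        for (i = 0; i < NUM_COUNTERS; i++)
                fake_eventsel[i] = EVENTSEL_ENABLE;

        pmu_disable_all(0xf);                   /* all four counters active */
        printf("counter 0 eventsel after disable: %#llx\n",
               (unsigned long long)fake_eventsel[0]);
        return 0;
}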
But that's exactly what is going on on the Intel side: the PMU is stopped
on interrupt. An argument for this is that you don't necessarily want to
monitor across the PMU handler, i.e., measure the very overhead it
introduces.
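
The pattern being described looks roughly like the sketch below: disable
the whole PMU on entry to the NMI handler, process the latched overflows,
then re-enable on the way out. All function names here are hypothetical
stubs that only mirror the shape of such a handler, not the actual
intel_pmu_handle_irq():

/*
 * Sketch of the "stop the PMU while handling the NMI" pattern: disable
 * on entry, drain the overflows, re-enable on exit.  Everything here is
 * a stub; only the shape is meant to match.
 */
#include <stdio.h>

static void pmu_disable_all(void) { puts("PMU disabled"); }
static void pmu_enable_all(void)  { puts("PMU re-enabled"); }

/* pretend counters 0 and 1 have overflowed when the NMI arrives */
static unsigned int pmu_get_overflow_status(void) { return 0x3; }

static void handle_overflow(int idx)
{
        printf("handled overflow on counter %d\n", idx);
}

static int pmu_handle_irq(void)
{
        unsigned int status;
        int idx, handled = 0;

        pmu_disable_all();              /* counters stop ticking here */

        status = pmu_get_overflow_status();
        for (idx = 0; status; idx++, status >>= 1) {
                if (status & 1) {
                        handle_overflow(idx);
                        handled++;
                }
        }

        pmu_enable_all();               /* counting resumes on the way out */
        return handled;                 /* 0 would mean an "unknown" NMI   */
}

int main(void)
{
        printf("NMI handled %d overflow(s)\n", pmu_handle_irq());
        return 0;
}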
> Does your patch below solve something that my patch doesn't?
>
> Btw, Ingo, my patch should be applied to tip/perf/urgent as it fixes
> the regression you discovered on AMD systems.
>
> Thanks,
>
> -Robert
>
>>
>> static int x86_pmu_handle_irq(struct pt_regs *regs)
>> {
>>
>> <snip..>
>>
>>         for (idx = 0; idx < x86_pmu.num_counters; idx++) {
>>                 if (!test_bit(idx, cpuc->active_mask))
>>                         continue;
>>
>> <snip..>
>>
>> counter reset-->        if (!x86_perf_event_set_period(event))
>>                                 continue;
>>
>> still ticking-->        if (perf_event_overflow(event, 1, &data, regs))
>>                         <boom overflow>
>> stopped here -->                x86_pmu_stop(event);
>>
>> The way to solve this is to disable the pmu while processing the
>> overflows and re-enable when done.
>>
>> Signed-off-by: Don Zickus <dzickus@...hat.com>
>> ---
>> arch/x86/kernel/cpu/perf_event.c | 4 ++++
>> 1 files changed, 4 insertions(+), 0 deletions(-)
>>
>> diff --git a/arch/x86/kernel/cpu/perf_event.c b/arch/x86/kernel/cpu/perf_event.c
>> index 48c6d8d..d4fe95d 100644
>> --- a/arch/x86/kernel/cpu/perf_event.c
>> +++ b/arch/x86/kernel/cpu/perf_event.c
>> @@ -1158,6 +1158,8 @@ static int x86_pmu_handle_irq(struct pt_regs *regs)
>>
>>         cpuc = &__get_cpu_var(cpu_hw_events);
>>
>> +       x86_pmu_disable_all();
>> +
>>         for (idx = 0; idx < x86_pmu.num_counters; idx++) {
>>                 if (!test_bit(idx, cpuc->active_mask))
>>                         continue;
>> @@ -1182,6 +1184,8 @@ static int x86_pmu_handle_irq(struct pt_regs *regs)
>>                         x86_pmu_stop(event, 0);
>>         }
>>
>> +       x86_pmu_enable_all(0);
>> +
>>         if (handled)
>>                 inc_irq_stat(apic_perf_irqs);
>>
>> --
>> 1.7.2.3
>>
>>
>
> --
> Advanced Micro Devices, Inc.
> Operating System Research Center
>
>