Date:	Tue, 20 Apr 2010 12:31:06 +0200
From:	Robert Richter <robert.richter@....com>
To:	Stephane Eranian <eranian@...gle.com>
CC:	Peter Zijlstra <a.p.zijlstra@...llo.nl>,
	Ingo Molnar <mingo@...e.hu>,
	LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 11/12] perf, x86: implement AMD IBS event configuration

On 19.04.10 15:46:29, Stephane Eranian wrote:
> > +static inline void amd_pmu_disable_ibs(void)
> > +{
> > +       struct cpu_hw_events *cpuc = &__get_cpu_var(cpu_hw_events);
> > +       u64 val;
> > +
> > +       if (test_bit(X86_PMC_IDX_SPECIAL_IBS_FETCH, cpuc->active_mask)) {
> > +               rdmsrl(MSR_AMD64_IBSFETCHCTL, val);
> > +               val &= ~IBS_FETCH_ENABLE;
> > +               wrmsrl(MSR_AMD64_IBSFETCHCTL, val);
> > +       }
> > +       if (test_bit(X86_PMC_IDX_SPECIAL_IBS_OP, cpuc->active_mask)) {
> > +               rdmsrl(MSR_AMD64_IBSOPCTL, val);
> > +               val &= ~IBS_OP_ENABLE;
> > +               wrmsrl(MSR_AMD64_IBSOPCTL, val);
> > +       }
> > +}
> > +
> > +static inline void amd_pmu_enable_ibs(void)
> > +{
> > +       struct cpu_hw_events *cpuc = &__get_cpu_var(cpu_hw_events);
> > +       u64 val;
> > +
> > +       if (test_bit(X86_PMC_IDX_SPECIAL_IBS_FETCH, cpuc->active_mask)) {
> > +               rdmsrl(MSR_AMD64_IBSFETCHCTL, val);
> > +               val |= IBS_FETCH_ENABLE;
> > +               wrmsrl(MSR_AMD64_IBSFETCHCTL, val);
> > +       }
> > +       if (test_bit(X86_PMC_IDX_SPECIAL_IBS_OP, cpuc->active_mask)) {
> > +               rdmsrl(MSR_AMD64_IBSOPCTL, val);
> > +               val |= IBS_OP_ENABLE;
> > +               wrmsrl(MSR_AMD64_IBSOPCTL, val);
> > +       }
> > +}
> 
> Aren't you picking up stale state by using read-modify-write here?

Hmm, there is a reason for this implementation. Both functions are
only used in .enable_all() and .disable_all(), which were originally
used in atomic sections to disable NMIs during event scheduling and
setup. For this short switch-off of the pmu the register state is not
saved. On Intel this is implemented by only writing to
MSR_CORE_PERF_GLOBAL_CTRL. The proper way to enable events is to use
amd_pmu_enable_event(), which is x86_pmu.enable(event).
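
For reference, this is roughly what the Intel path looks like for this
short switch-off (a simplified sketch from memory, so take the exact
function bodies as an assumption, not a quote of the current code):

static void intel_pmu_disable_all(void)
{
	/* a single msr write stops all counters, no per-counter state is touched */
	wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, 0);
}

static void intel_pmu_enable_all(void)
{
	/* reenable exactly the set of counters that was active before */
	wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, x86_pmu.intel_ctrl);
}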

Looking at the latest sources this might have changed a little in the
meantime, and we have to check that the functions above are not used to
enable events. So I am not really sure whether the register may be set
up wrong. But if this happens, it is only for the first sample after
enabling or reenabling the whole pmu, since the interrupt handler uses
__x86_pmu_enable_event() to reenable IBS. Thus I would prefer the
implementation above and instead reimplement the NMI handling (see
below).

Also, we should generally avoid a global pmu disable, since it is
expensive and the pmu state may not be restored properly for some
events on some hardware revisions. But the code must then be atomic.

An alternative solution to disable NMIs on AMD could be to mask the
NMI by writing to the local APIC instead of the counter MSRs. This
could be more efficient, and the pmu would go on sampling without the
need to restore its state. Or this way: the interrupt handler does not
handle events and only clears the reason if some 'atomic' flag is set?
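
Roughly what I have in mind for the lapic variant (untested sketch, the
helper names are made up, but the apic accessors and flags exist):

static void perf_nmi_mask_lapic(void)
{
	/* mask the perfctr NMI in the local apic, the pmu keeps counting */
	apic_write(APIC_LVTPC, APIC_DM_NMI | APIC_LVT_MASKED);
}

static void perf_nmi_unmask_lapic(void)
{
	/* unmask it again, delivery mode stays NMI */
	apic_write(APIC_LVTPC, APIC_DM_NMI);
}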

IMHO the implementation of x86_pmu_enable_all() and
x86_pmu_disable_all() is not accurate either, since the state is not
saved when all counters are disabled and they are then reenabled with
the init value. On Intel counters this has no impact since the global
ctrl MSR is used. In all other cases x86_pmu_enable_all() does not
restore the previous counter state.
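
To illustrate this (simplified sketch from memory, field names may
differ in the current tree): x86_pmu_disable_all() only clears the
enable bit, while x86_pmu_enable_all() writes back hwc->config, i.e.
the init value, not whatever was in the eventsel register before:

static void x86_pmu_disable_all(void)
{
	struct cpu_hw_events *cpuc = &__get_cpu_var(cpu_hw_events);
	int idx;

	for (idx = 0; idx < x86_pmu.num_events; idx++) {
		u64 val;

		if (!test_bit(idx, cpuc->active_mask))
			continue;
		/* clear the enable bit, keep the rest of the register */
		rdmsrl(x86_pmu.eventsel + idx, val);
		val &= ~ARCH_PERFMON_EVENTSEL_ENABLE;
		wrmsrl(x86_pmu.eventsel + idx, val);
	}
}

static void x86_pmu_enable_all(void)
{
	struct cpu_hw_events *cpuc = &__get_cpu_var(cpu_hw_events);
	int idx;

	for (idx = 0; idx < x86_pmu.num_events; idx++) {
		struct hw_perf_event *hwc = &cpuc->events[idx]->hw;

		if (!test_bit(idx, cpuc->active_mask))
			continue;
		/* the init value is written, not the previous register contents */
		wrmsrl(x86_pmu.eventsel + idx,
		       hwc->config | ARCH_PERFMON_EVENTSEL_ENABLE);
	}
}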

-Robert

-- 
Advanced Micro Devices, Inc.
Operating System Research Center
email: robert.richter@....com

