Message-ID: <20130128231659.GW30577@one.firstfloor.org>
Date: Tue, 29 Jan 2013 00:16:59 +0100
From: Andi Kleen <andi@...stfloor.org>
To: Stephane Eranian <eranian@...gle.com>
Cc: Andi Kleen <andi@...stfloor.org>, Ingo Molnar <mingo@...nel.org>,
LKML <linux-kernel@...r.kernel.org>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>,
Andrew Morton <akpm@...ux-foundation.org>,
Arnaldo Carvalho de Melo <acme@...hat.com>,
Jiri Olsa <jolsa@...hat.com>,
Namhyung Kim <namhyung@...nel.org>,
Andi Kleen <ak@...ux.intel.com>
Subject: Re: [PATCH 07/12] perf, x86: Avoid checkpointed counters causing excessive TSX aborts v3
> I don't really buy this workaround. You are assuming you're always
> measuring the INTX_CHECKPOINTED event by itself.
There's no such assumption.
> So what if you get into the handler because of a PMI due to an overflow
> of another counter which is active at the same time as counter 2?
> You're going to artificially add an overflow to counter 2, unless you're
> enforcing that only counter 2 is in use.
All the code does is always check the counter. There's no
"overflow added". For counting, the counter may be read out and
accumulated a bit earlier than normal, but that's no problem. This
only happens for a checkpointed counter 2, not for anything else.
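To make this concrete, here is a minimal sketch of the idea (my
illustration, not the patch text; hsw_force_cp_status() is a made-up
name, is_event_intx_cp() is the helper quoted further down):

/*
 * Sketch only: a TSX abort rolls a checkpointed counter back, which can
 * clear its bit in GLOBAL_STATUS even though the PMI still fires. So
 * always probe counter 2 when it is checkpointed; at worst the counter
 * is read out and accumulated slightly earlier than usual.
 */
static u64 hsw_force_cp_status(struct cpu_hw_events *cpuc, u64 status)
{
	if (is_event_intx_cp(cpuc->events[2]))
		status |= 1ULL << 2;	/* take the normal overflow path */
	return status;
}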
> The counter is reinstated to its state before the critical section, but
> the PMI cannot be cancelled and there is no state left behind to tell
> what to do with it.
The PMI is effectively spurious, but we use it to set the counter back.
I don't know what you mean by "cancel"; it already happened, of course.
> static inline bool is_event_intx_cp(struct perf_event *event)
> {
> return event && (event->hw.config & HSW_INTX_CHECKPOINTED);
> }
They both look the same to me.
>
>
> > for_each_set_bit(bit, (unsigned long *)&status, X86_PMC_IDX_MAX) {
> > struct perf_event *event = cpuc->events[bit];
> >
> > @@ -1615,6 +1635,20 @@ static int hsw_hw_config(struct perf_event *event)
> > ((event->hw.config & ARCH_PERFMON_EVENTSEL_ANY) ||
> > event->attr.precise_ip > 0))
> > return -EIO;
> > + if (event->hw.config & HSW_INTX_CHECKPOINTED) {
> > + /*
> > + * Sampling of checkpointed events can cause situations where
> > + * the CPU constantly aborts because of an overflow, which is
> > + * then checkpointed back and ignored. Forbid checkpointing
> > + * for sampling.
> > + *
> > + * But still allow a long sampling period, so that perf stat
> > + * from KVM works.
> > + */
>
> What does perf stat have to do with sample_period?
perf stat always uses a period and accumulates into a larger software
counter, as you probably know. Also, with the other check we only allow
checkpointing with stat, not with sampling.
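Roughly the kind of host event the guest counter ends up as (a sketch
under my assumptions, not the KVM vPMU code; guest_eventsel is a
hypothetical placeholder):

	struct perf_event_attr attr = {
		.type		= PERF_TYPE_RAW,
		.config		= guest_eventsel,	/* whatever the guest programmed */
		.sample_period	= 0x7fffffff,		/* large period, effectively counting */
	};

Each overflow just gets accumulated into a wider 64-bit software count,
so a large period still behaves like pure counting.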
>
> > + if (event->attr.sample_period > 0 &&
> > + event->attr.sample_period < 0x7fffffff)
> > + return -EIO;
> > + }
> Same comment about -EIO vs. -EOPNOTSUPP. sample_period is u64,
> so it's always >= 0. Where does this 31-bit limit come from?
That's what perf stat uses when running in the KVM guest.
> Experimentation?
The code checks for > 0, not >= 0.
> Could be written:
> if (event->attr.sample_period && event->attr.sample_period < 0x7fffffff)
That's 100% equivalent to what I wrote.
I can change the error value.
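For example, roughly (illustrative only, using your suggested form):

	if (event->attr.sample_period &&
	    event->attr.sample_period < 0x7fffffff)
		return -EOPNOTSUPP;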
-Andi
--
ak@...ux.intel.com -- Speaking for myself only.