Message-ID: <alpine.DEB.2.10.1405142246170.30015@vincent-weaver-1.umelst.maine.edu>
Date: Wed, 14 May 2014 22:55:40 -0400 (EDT)
From: Vince Weaver <vincent.weaver@...ne.edu>
To: Vince Weaver <vincent.weaver@...ne.edu>
cc: linux-kernel@...r.kernel.org,
Peter Zijlstra <a.p.zijlstra@...llo.nl>,
Paul Mackerras <paulus@...ba.org>,
Ingo Molnar <mingo@...hat.com>
Subject: Re: perfevents: irq loop stuck!
On Tue, 13 May 2014, Vince Weaver wrote:
> pe[32].sample_period=0xc0000000000000bd;
>
> Should it be possible to open an event with a large negative sample_period
> like that?
So this seems to be a real bug.

attr->sample_period is a u64 value, but internally it gets cast to an
s64 and added to itself, so all kinds of unexpected things happen.
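To see the reinterpretation concretely, here is a minimal userspace
sketch (only the variable names mirror the kernel's; everything else
is illustrative):

	#include <stdio.h>
	#include <stdint.h>

	int main(void)
	{
		uint64_t sample_period = 0xc0000000000000bdULL; /* attr->sample_period */
		int64_t period = (int64_t)sample_period; /* the kernel's s64 view */

		/* prints -4611686018427387715: the "huge" period is now negative */
		printf("%lld\n", (long long)period);
		return 0;
	}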
So if you set attr->sample_period to 0xc0000000000000bd in the hopes of
sampling the RETIRED_INSTRUCTIONS event every 5 years or so, what
happens instead is that in
	x86_perf_event_set_period()
the value is cast to a signed 64-bit value, so we are now negative.
Then "left" is set to the period, so left is negative too.
Then, since left is less than 0, the period is added to left, doubling
the negative value.
For a large enough period this doubling overflows the signed 64-bit
integer, and suddenly we are in undefined-behavior territory, where
we're lucky the C compiler doesn't decide to format the hard drive.
Anyway, we are still less than 0, so then the
	if (unlikely(left < 2))
		left = 2;
code kicks in, and suddenly our hugely positive sample_period has
changed to just being "2". So we get a storm of interrupts instead of
one every 5 years.
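Putting the whole sequence together, here is a userspace sketch of the
arithmetic as I understand it (the branches paraphrase
x86_perf_event_set_period(); the local64 accessors and the actual
counter programming are elided):

	#include <stdio.h>
	#include <stdint.h>

	int main(void)
	{
		int64_t period = (int64_t)0xc0000000000000bdULL; /* s64 sample_period */
		int64_t left = 0;                        /* hwc->period_left */

		/* the "way outside a reasonable range" check fires,
		 * because -period is hugely positive */
		if (left <= -period)
			left = period;           /* left is now negative */

		/* left <= 0, so the period is added in, doubling the
		 * negative value; for a big enough magnitude this signed
		 * addition overflows, which is undefined behavior in C */
		if (left <= 0)
			left += period;

		/* still negative, so the minimum-period quirk clamps it */
		if (left < 2)
			left = 2;

		/* prints "effective period: 2" */
		printf("effective period: %lld\n", (long long)left);
		return 0;
	}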
So, I'm not sure how to fix this without a total rewrite, unless we
want to cheat and just cap sample_period at 62 bits or something.
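If we do go the cheating route, the check itself would be tiny; a
sketch, assuming we bolt it into the attr-validation path of
perf_event_open() (the exact location and error code are my guess):

	/* hypothetical: reject any sample_period that would go negative
	 * when viewed as an s64; capping one bit lower (62 bits) would
	 * also leave room for the doubling above */
	if (!attr->freq && (attr->sample_period & (1ULL << 63)))
		return -EINVAL;

(The !attr->freq guard is because sample_period and sample_freq share
a union in perf_event_attr.)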
Also, it's unclear why this sometimes causes a stuck interrupt leading
to the "irq loop stuck" message. I have a reproducible fuzzer test case
that triggers it, but I can't isolate it down to a simple test case...
Vince