Message-Id: <1226629776.3343.84.camel@calx>
Date: Thu, 13 Nov 2008 20:29:36 -0600
From: Matt Mackall <mpm@...enic.com>
To: "H. Peter Anvin" <hpa@...or.com>
Cc: Alexander van Heukelum <heukelum@...tmail.fm>,
Ingo Molnar <mingo@...e.hu>, Andi Kleen <andi@...stfloor.org>,
Cyrill Gorcunov <gorcunov@...il.com>,
Alexander van Heukelum <heukelum@...lshack.com>,
LKML <linux-kernel@...r.kernel.org>,
Thomas Gleixner <tglx@...utronix.de>, lguest@...abs.org,
jeremy@...source.com, Steven Rostedt <srostedt@...hat.com>,
Mike Travis <travis@....com>
Subject: Re: [PATCH RFC/RFB] x86_64, i386: interrupt dispatch changes

On Thu, 2008-11-13 at 17:18 -0800, H. Peter Anvin wrote:
> Matt Mackall wrote:
> > On Mon, 2008-11-10 at 21:00 -0800, H. Peter Anvin wrote:
> >> Okay, after spending most of the day trying to get something that isn't
> >> completely like white noise (interesting problem, otherwise I'd have
> >> given up long ago) I did, eventually, come up with something that looks
> >> like it's significant. I did a set of multiple runs, and am looking for
> >> the "waterfall points" in the cumulative statistics.
> >>
> >> http://www.zytor.com/~hpa/baseline-hpa-3000-3600.pdf
> >>
> >> This particular set of data points was gathered on a 64-bit kernel, so I
> >> didn't try the segment technique.
> >>
> >> It looks to me that the collection of red lines is enough to the left of
> >> the black ones that one can assume there is a significant effect,
> >> probably by about a cache miss worth of time.
> >
> > This graph is a little confusing. Is the area under each curve here
> > supposed to be a constant?
> >
>
> No, they reflect individual runs. They start at 1 at the top left and
> drop to 0 at the far right in each case. What matters is the horizontal
> position of large vertical drops.
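
(To make sure we're talking about the same construction, here's a
minimal sketch of a normalized survival curve built from raw per-run
latency samples; the names are mine, not anything from your plotting
scripts:)

import numpy as np

def survival_curve(cycles):
    # Complementary CDF: fraction of interrupts at or above each
    # observed latency.  Note the division by the sample count: if
    # the plotted curves were normalized like this, each would start
    # at exactly 1 on the left and step down to 0 on the right.
    x = np.sort(cycles)
    y = 1.0 - np.arange(len(x)) / float(len(x))
    return x, y
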
Still confused. If, say, the top blue line on the left represents the
same number of interrupts as the bottom red one, then at some point it
must cross under the red one as it goes to the right, which it does not
appear to do. So it doesn't appear that the scale on the left is
actually in normalized units of probability, no?

Though I'll agree that even if they're not scaled so that the area under
the curve sums to a probability of 1, the centerpoint of the vertical
drop is what matters. But that's rather hard to read off this chart, as
the blue line I mentioned has a centerpoint well above the red
one, so while it looks like a shift of 80 cycles, it's more like 30.
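
(If the curves were all normalized, the centerpoint of the drop would
simply be the median latency, and the shift could be computed rather
than eyeballed; a hypothetical sketch, again assuming raw per-run
samples:)

import numpy as np

def drop_shift(baseline_cycles, patched_cycles):
    # The centerpoint of the vertical drop of a normalized survival
    # curve is the median latency, so the horizontal shift between
    # two runs is just the difference of their medians.
    return np.median(patched_cycles) - np.median(baseline_cycles)
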
Is there any theoretical reason you can't just sum the histograms for
runs of the same code and then divide by event count? Is there some sort
of alignment/cache-coloring issue across boots?
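
(I.e., something along these lines, assuming per-run histograms binned
over the same cycle-count range; illustrative only, not your scripts:)

import numpy as np

def pooled_distribution(histograms):
    # Sum the per-run histograms bin by bin, then divide by the
    # total event count so each run is weighted by the number of
    # interrupts it actually saw.
    total = np.sum(histograms, axis=0)
    return total / float(total.sum())

A single pooled curve per kernel variant would make the comparison one
line against one line.
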
> > Is this latency from all interrupts as seen by userspace? Or does a
> > particular interrupt dominate?
> >
>
> All interrupts, but rather inherently the difference between interrupt
> handlers is going to be bigger than the differences between
> implementations of the same handler. I *believe* all the interrupts
> you're seeing in that graph are probably timer interrupts. The other
> major interrupt source that was active on the system was USB.

That's what I'd expect on an idle system, certainly.

Anyway, I'm actually surprised your red graph is visibly better than
your blue one. FWIW, I was leaning towards your simpler variant (and
away from the magical segment register proposal). I'd be happy to see
either of your versions submitted.
--
Mathematics is the supreme nostalgia of our time.