Date:	Thu, 13 Nov 2008 19:22:24 -0800
From:	"H. Peter Anvin" <hpa@...or.com>
To:	Matt Mackall <mpm@...enic.com>
CC:	Alexander van Heukelum <heukelum@...tmail.fm>,
	Ingo Molnar <mingo@...e.hu>, Andi Kleen <andi@...stfloor.org>,
	Cyrill Gorcunov <gorcunov@...il.com>,
	Alexander van Heukelum <heukelum@...lshack.com>,
	LKML <linux-kernel@...r.kernel.org>,
	Thomas Gleixner <tglx@...utronix.de>, lguest@...abs.org,
	jeremy@...source.com, Steven Rostedt <srostedt@...hat.com>,
	Mike Travis <travis@....com>
Subject: Re: [PATCH RFC/RFB] x86_64, i386: interrupt dispatch changes

Matt Mackall wrote:
>>>
>> No, they reflect individual runs.  They start at 1 at the top left and
>> drop to 0 at the far right in each case.  What matters is the horizontal
>> position of large vertical drops.
> 
> Still confused. If, say, the top blue line on the left represents the
> same number of interrupts as the bottom red one, then at some point it
> must cross under the red one as it goes to the right, which it does not
> appear to do. Thus, it does not appear the scale on the left is actually
> in units of constant probability, no?
> 
> Though I'll agree that even if they're not scaled so that the area under
> the curve sums to a probability of 1, the centerpoint of the vertical
> drop is what matters. But that's rather hard to read off this chart, as
> the blue line I mentioned has a center point well above the red
> one, so while it looks like a shift of 80 cycles, it's more like 30.
> 
> Is there any theoretical reason you can't just sum the histograms for
> runs of the same code and then divide by event count? Is there some sort
> of alignment/cache-coloring issue across boots?
> 

The reason for the multiple curves is to show the range of uncertainty.

There are three sets of graphs in there: black (current mainline,
16-byte stubs), red (4-byte stubs with a double jump), and blue (8-byte
stubs).
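
As a rough illustration of the curve format (a complementary CDF per run:
the fraction of interrupts taking at least a given number of cycles) and of
the summed-and-normalized alternative Matt asks about, here is a minimal C
sketch; the latency samples, bin width, and run count are invented for
illustration and are not taken from the measurements discussed here:

#include <stdio.h>

#define NBINS     64        /* hypothetical: 64 bins of 8 cycles each */
#define BINWIDTH   8
#define NRUNS      3
#define NSAMPLES   6

/* Hypothetical per-run latencies, in cycles (illustration only). */
static const unsigned samples[NRUNS][NSAMPLES] = {
        { 120, 128, 132, 136, 200, 204 },
        { 116, 124, 128, 132, 196, 200 },
        { 124, 128, 136, 140, 208, 212 },
};

static void histogram(const unsigned *s, int n, unsigned long *hist)
{
        int i;

        for (i = 0; i < n; i++) {
                unsigned bin = s[i] / BINWIDTH;

                if (bin >= NBINS)
                        bin = NBINS - 1;
                hist[bin]++;
        }
}

/* Complementary CDF: fraction of events taking >= bin*BINWIDTH cycles.
 * This is the "starts at 1, drops to 0" shape of the plotted curves. */
static void ccdf(const unsigned long *hist, unsigned long total, double *out)
{
        unsigned long tail = total;
        int b;

        for (b = 0; b < NBINS; b++) {
                out[b] = (double)tail / total;
                tail -= hist[b];
        }
}

int main(void)
{
        unsigned long per_run[NRUNS][NBINS] = { { 0 } };
        unsigned long summed[NBINS] = { 0 };
        unsigned long total = 0;
        double curve[NBINS];
        int r, b;

        /* One curve per run: the families of lines in the plot. */
        for (r = 0; r < NRUNS; r++) {
                histogram(samples[r], NSAMPLES, per_run[r]);
                ccdf(per_run[r], NSAMPLES, curve);
                printf("run %d: P(cycles >= 128) = %.2f\n",
                       r, curve[128 / BINWIDTH]);
        }

        /* Matt's alternative: sum the raw histograms across runs and
         * divide by the total event count, giving one averaged curve. */
        for (r = 0; r < NRUNS; r++)
                for (b = 0; b < NBINS; b++) {
                        summed[b] += per_run[r][b];
                        total += per_run[r][b];
                }
        ccdf(summed, total, curve);
        printf("summed: P(cycles >= 128) = %.2f\n", curve[128 / BINWIDTH]);

        return 0;
}

Summing the raw histograms and dividing by the total event count does give
one clean curve per stub variant, but it also hides the run-to-run spread
that the separate curves are meant to show.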

An idle system is pretty much the case that should favor "blue" over
"red"; realistically, I think the graphs show the two are identical within
the limits of measurement, and both are significantly better than "black".

Since this is pretty much the optimal case for "blue" and it doesn't
show any performance win over "red", I implemented the "red" option and
pushed it into tip:x86/irq.
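
For scale, a back-of-the-envelope sketch of the dispatch-stub table
footprint the three variants imply; the vector count and cache-line size
below are assumptions picked for illustration, not figures from the patch:

#include <stdio.h>

int main(void)
{
        /* Assumed for illustration only: roughly 256 vectors minus the
         * 32 exception vectors, and 64-byte cache lines. */
        const int vectors = 224;
        const int cacheline = 64;
        const struct {
                const char *name;
                int stub_bytes;
        } variants[] = {
                { "black: mainline, 16-byte stubs",    16 },
                { "blue:  8-byte stubs",                8 },
                { "red:   4-byte stubs + double jump",  4 },
        };
        unsigned int i;

        for (i = 0; i < sizeof(variants) / sizeof(variants[0]); i++) {
                int bytes = vectors * variants[i].stub_bytes;

                printf("%-36s %5d bytes, %3d cache lines\n",
                       variants[i].name, bytes,
                       (bytes + cacheline - 1) / cacheline);
        }
        return 0;
}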

> That's what I'd expect on an idle system, certainly.
> 
> Anyway, I'm actually surprised your red graph is visibly better than
> your blue one. FWIW, I was leaning towards your simpler variant (and
> away from the magical segment register proposal). I'd be happy to see
> either of your versions submitted.

Same here, which is why I wanted to check them both out.

	-hpa
