Message-ID: <4910A390.409@zytor.com>
Date:	Tue, 04 Nov 2008 11:33:36 -0800
From:	"H. Peter Anvin" <hpa@...or.com>
To:	Alexander van Heukelum <heukelum@...tmail.fm>
CC:	Andi Kleen <andi@...stfloor.org>,
	Cyrill Gorcunov <gorcunov@...il.com>,
	Alexander van Heukelum <heukelum@...lshack.com>,
	LKML <linux-kernel@...r.kernel.org>, Ingo Molnar <mingo@...e.hu>,
	Thomas Gleixner <tglx@...utronix.de>, lguest@...abs.org,
	jeremy@...source.com, Steven Rostedt <srostedt@...hat.com>,
	Mike Travis <travis@....com>
Subject: Re: [PATCH RFC/RFB] x86_64, i386: interrupt dispatch changes

Okay, looking at this some more, the current interrupt stubs are just
plain braindead.

We have a large number of push instructions which save a negative
number, even when that means using the full 5-byte form; then we use:

	unsigned vector = ~regs->orig_ax;

in do_IRQ.  This is utterly moronic; if we use the short form push at
all times, then we can set the upper bits (which distinguish us from a
system call entry) at leisure, via a simple orl in common code, rather
than in each stub, where doing so bloats the stub above the 8-byte mark.
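A quick sketch of what that buys us, in throwaway Python rather than
kernel code (the 0xffffff00 mask for the orl is my assumption; all the
mail requires is that the upper bits end up set):

```python
# Model of the proposed stub + common-code fixup, as 32-bit words.

def push_imm8(vector):
    """Word left on the stack by the 2-byte 'push $imm8' (6A xx).
    The CPU sign-extends the 8-bit immediate, so vectors >= 128
    already come out with all upper bits set."""
    b = vector & 0xff
    return (0xffffff00 | b) if b & 0x80 else b

def fixup_orig_ax(word):
    """The single 'orl' in the common entry path: force the upper bits
    on, so orig_ax is negative (interrupt) rather than >= 0 (syscall)."""
    return word | 0xffffff00

for vector in range(256):
    word = fixup_orig_ax(push_imm8(vector))
    assert word & 0xff == vector      # vector survives in the low byte
    assert word & 0x80000000          # negative => not a syscall entry
```

Every stub then gets away with the 2-byte push, and the vector is
recovered from the low byte instead of via ~orig_ax.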

That way, each stub has the simple form:

	6A xx E9 yy yy yy yy 90

Down to 8 bytes, including one byte of padding.  Already better - we are
down to 2K total, and each stub is aligned.
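Sanity-checking the byte counting (again just Python arithmetic, not
part of any patch): 6A xx is the 2-byte push, E9 plus rel32 is 5 bytes,
and one 90 NOP pads the stub out to 8:

```python
# Instruction sizes for the proposed stub: 6A xx E9 yy yy yy yy 90
PUSH_IMM8 = 2   # 6A xx           short-form push of the vector
JMP_REL32 = 5   # E9 yy yy yy yy  jump to the common entry code
NOP_PAD   = 1   # 90              padding to the next 8-byte boundary

STUB = PUSH_IMM8 + JMP_REL32 + NOP_PAD   # 8 bytes, naturally aligned
TOTAL = 256 * STUB                        # one stub per vector

print(STUB, TOTAL)   # 8 2048 -- 2K for the whole table
```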

Now, we can do better than that at the cost of an extra branch.  The
extra branch, however, is a direct unconditional branch and so is not
subject to misprediction (good), although it may end up taking an extra
icache miss (bad):

We can group our vectors into 8 groups of 32 vectors each.  Each group
contains stubs of the form:

	6A xx EB yy

... which jump to a common jump instruction at the end of each group.
Thus, each group takes 32*4 + 5 bytes, plus 3 bytes for alignment =
136 bytes, for a total of 1088 bytes.
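The same back-of-the-envelope check for the grouped layout, plus a
check (my addition) that the rel8 jump actually reaches the shared E9
from the first stub in a group:

```python
SHORT_STUB = 2 + 2        # 6A xx (push imm8) + EB yy (jmp rel8)
COMMON_JMP = 5            # one E9 rel32 at the end of each group
PADDING    = 3            # alignment padding per group
GROUP      = 32 * SHORT_STUB + COMMON_JMP + PADDING   # 136 bytes

TOTAL = 8 * GROUP         # 8 groups cover all 256 vectors

# rel8 reaches +/-127 bytes; worst case is the first stub, whose jmp
# ends at offset 4 and must reach the common jmp at offset 128:
WORST_DISP = 32 * SHORT_STUB - SHORT_STUB   # 124, fits in rel8
assert -128 <= WORST_DISP <= 127

print(GROUP, TOTAL)   # 136 1088
```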

This has two disadvantages:
- an extra jump.
- we can no longer redirect a stub away from common code by
  changing the branch in that slot.  We have to instead modify
  the IDT.  This means "dedicated" interrupts don't get the
  vector number at all, which is probably fine -- to be honest,
  I'm not sure if they do at the moment either.

Fixing the first of these I think is a no-brainer.  That will cut the
size of the existing stub pool by almost half.  The second is more of a
judgement call, and I'd like to see performance numbers for it.  Either
way, I think it's worthwhile to consider this as an alternative to
playing segmentation tricks, which I think could have really nasty
side effects.

	-hpa

