Date:	Sun, 17 May 2015 07:58:36 +0200
From:	Ingo Molnar <mingo@...nel.org>
To:	Linus Torvalds <torvalds@...ux-foundation.org>
Cc:	Andy Lutomirski <luto@...capital.net>,
	Davidlohr Bueso <dave@...olabs.net>,
	Peter Anvin <hpa@...or.com>,
	Denys Vlasenko <dvlasenk@...hat.com>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	Tim Chen <tim.c.chen@...ux.intel.com>,
	Borislav Petkov <bp@...en8.de>,
	Peter Zijlstra <peterz@...radead.org>,
	"Chandramouleeswaran, Aswin" <aswin@...com>,
	Peter Zijlstra <a.p.zijlstra@...llo.nl>,
	Brian Gerst <brgerst@...il.com>,
	Paul McKenney <paulmck@...ux.vnet.ibm.com>,
	Thomas Gleixner <tglx@...utronix.de>,
	Jason Low <jason.low2@...com>,
	"linux-tip-commits@...r.kernel.org" 
	<linux-tip-commits@...r.kernel.org>
Subject: Re: [tip:x86/asm] x86: Pack function addresses tightly as well


* Linus Torvalds <torvalds@...ux-foundation.org> wrote:

> On Fri, May 15, 2015 at 2:39 AM, tip-bot for Ingo Molnar
> <tipbot@...or.com> wrote:
> >
> > We can pack function addresses tightly as well:
> 
> So I really want to see performance numbers on a few 
> microarchitectures for this one in particular.
> 
> The kernel generally doesn't have loops (well, not the kinds of 
> high-rep loops that tend to be worth aligning), and I think the 
> general branch/loop alignment is likely fine. But the function 
> alignment doesn't tend to have the same kind of I$ advantages, it's 
> more likely purely a size issue and not as interesting. Function 
> targets are also more likely to be not in the cache, I suspect, 
> since you don't have a loop priming it or a short forward jump that 
> just got the cacheline anyway. And then *not* aligning the function 
> would actually tend to make it *less* dense in the I$.
> 
> Put another way: I suspect this is more likely to hurt, and less 
> likely to help than the others.

Yeah, indeed.

So my thinking was that it would help, because:

  - There's often locality of reference between functions: we often
    have a handful of hot functions sitting next to each other, and
    packing would pull them closer together, creating a smaller net
    I$ footprint.

  - We have a handful of 'clusters' of small and often hot functions,
    especially in the locking code:

	ffffffff81893080 T _raw_spin_unlock_irqrestore
	ffffffff81893090 T _raw_read_unlock_irqrestore
	ffffffff818930a0 T _raw_write_unlock_irqrestore
	ffffffff818930b0 T _raw_spin_trylock_bh
	ffffffff81893110 T _raw_spin_unlock_bh
	ffffffff81893130 T _raw_read_unlock_bh
	ffffffff81893150 T _raw_write_unlock_bh
	ffffffff81893170 T _raw_read_trylock
	ffffffff818931a0 T _raw_write_trylock
	ffffffff818931d0 T _raw_read_lock_irqsave
	ffffffff81893200 T _raw_write_lock_irqsave
	ffffffff81893230 T _raw_spin_lock_bh
	ffffffff81893270 T _raw_spin_lock_irqsave
	ffffffff818932c0 T _raw_write_lock
	ffffffff818932e0 T _raw_write_lock_irq
	ffffffff81893310 T _raw_write_lock_bh
	ffffffff81893340 T _raw_spin_trylock
	ffffffff81893380 T _raw_read_lock
	ffffffff818933a0 T _raw_read_lock_irq
	ffffffff818933c0 T _raw_read_lock_bh
	ffffffff818933f0 T _raw_spin_lock
	ffffffff81893430 T _raw_spin_lock_irq
	ffffffff81893450

     That's 976 bytes total (0xffffffff81893450 - 0xffffffff81893080)
     when 16-byte aligned.

     With function packing, they compress into:

	ffffffff817f2458 T _raw_spin_unlock_irqrestore
	ffffffff817f2463 T _raw_read_unlock_irqrestore
	ffffffff817f2472 T _raw_write_unlock_irqrestore
	ffffffff817f247d T _raw_read_unlock_bh
	ffffffff817f2498 T _raw_write_unlock_bh
	ffffffff817f24af T _raw_spin_unlock_bh
	ffffffff817f24c6 T _raw_read_trylock
	ffffffff817f24ef T _raw_write_trylock
	ffffffff817f250e T _raw_spin_lock_bh
	ffffffff817f2536 T _raw_read_lock_irqsave
	ffffffff817f255e T _raw_write_lock_irqsave
	ffffffff817f2588 T _raw_spin_lock_irqsave
	ffffffff817f25be T _raw_spin_trylock_bh
	ffffffff817f25f6 T _raw_spin_trylock
	ffffffff817f2615 T _raw_spin_lock
	ffffffff817f2632 T _raw_spin_lock_irq
	ffffffff817f2650 T _raw_write_lock
	ffffffff817f266b T _raw_write_lock_irq
	ffffffff817f2687 T _raw_write_lock_bh
	ffffffff817f26ad T _raw_read_lock
	ffffffff817f26c6 T _raw_read_lock_bh
	ffffffff817f26ea T _raw_read_lock_irq
	ffffffff817f2704

      That's 684 bytes (0xffffffff817f2704 - 0xffffffff817f2458): a
      stark difference that should show up as a smaller I$ footprint
      even when usage is sparse.

      OTOH, their ordering is far from ideal: for example the rarely
      used 'trylock' variants end up mixed into the middle, and the
      way we mix rwlock and spinlock ops isn't very pretty either.

      So we could reduce alignment for just the locking APIs, via per 
      .o cflags in the Makefile, if packing otherwise hurts the common 
      case.
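
      Roughly, a minimal sketch of what such a per-object override
      could look like (illustrative only - assuming the _raw_* APIs
      live in kernel/locking/spinlock.c and that the arch default
      keeps a larger -falign-functions value):

	# kernel/locking/Makefile: pack only the low-level lock APIs
	# tightly, leaving the kernel-wide alignment default untouched.
	# cc-option drops the flag on compilers that don't support it.
	CFLAGS_spinlock.o += $(call cc-option,-falign-functions=1)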

This function packing argument fails:

  - for large functions that are physically fragmented

  - if fewer than half of the functions in a hot workload end up
    packed next to each other; this might in fact be the common case.

  - even if functions are technically 'packed' next to each other, 
    this only works for small functions: larger functions typically 
    are hotter near their heads, with unlikely codepaths being in 
    their tails.

> Size matters, but size matters mainly from an I$ standpoint, not 
> from some absolute 'big is bad' issue.

Absolutely.

> [...] Also, even when size matters, performance matters too. I do 
> want performance numbers. Is this measurable?

Will try to measure this. I'm somewhat sceptical that I'll be able to 
measure any signal: alignment effects are very hard to measure on x86, 
especially on any realistic workload.
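
Something along these lines, though this is only a sketch: the events
are the generic perf aliases (they may be unsupported or need remapping
on a given CPU), and the workload is just a placeholder:

	# same workload on an aligned vs. a packed kernel, 10 runs each:
	perf stat -r 10 -e instructions,L1-icache-load-misses,iTLB-load-misses \
		-- perf bench sched messaging -g 20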

In any case, consider this function alignment patch shelved until it's 
properly measured.

Thanks,

	ingo
