Message-ID: <84144f020811171325m5eabca71ge525ea643dbe8209@mail.gmail.com>
Date: Mon, 17 Nov 2008 23:25:30 +0200
From: "Pekka Enberg" <penberg@...helsinki.fi>
To: "Linus Torvalds" <torvalds@...ux-foundation.org>
Cc: "Steven Rostedt" <rostedt@...dmis.org>,
LKML <linux-kernel@...r.kernel.org>,
"Paul Mackerras" <paulus@...ba.org>,
"Benjamin Herrenschmidt" <benh@...nel.crashing.org>,
linuxppc-dev@...abs.org,
"Andrew Morton" <akpm@...ux-foundation.org>,
"Ingo Molnar" <mingo@...e.hu>,
"Thomas Gleixner" <tglx@...utronix.de>
Subject: Re: Large stack usage in fs code (especially for PPC64)
On Mon, Nov 17, 2008 at 11:18 PM, Linus Torvalds
<torvalds@...ux-foundation.org> wrote:
> I do wonder just _what_ it is that causes the stack frames to be so
> horrid. For example, you have
>
> 18) 8896 160 .kmem_cache_alloc+0xfc/0x140
>
> and I'm looking at my x86-64 compile, and it has a stack frame of just 8
> bytes (!) for local variables plus the save/restore area (which looks like
> three registers plus frame pointer plus return address). IOW, if I'm
> looking at the code right (so big caveat: I did _not_ do a real stack
> dump!) the x86-64 stack cost for that same function is on the order of 48
> bytes. Not 160.
>
> Where does that factor-of-three+ difference come from? From the numbers, I
> suspect ppc64 has a 32-byte stack alignment, which may be part of it, and
> I guess the compiler is more eager to use all those extra registers and
> will happily have many more callee-saved regs that are actually used.
>
> But that's still a _lot_ of extra stack.
>
> Of course, you may have things like spinlock debugging etc enabled. Some
> of our debugging options do tend to blow things up.
Note that kmem_cache_alloc() is likely to contain lots of inlined
functions for both SLAB and SLUB. Perhaps that's what blows up stack
usage on ppc64?
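As an illustrative sketch of the mechanism (hypothetical helper names,
not actual slab code): when callees are forced inline, their locals are
merged into the single caller frame, so the caller can end up paying for
all of them at once:

```c
#include <string.h>

struct fake_cache { char name[32]; };

static inline __attribute__((always_inline))
void slow_path(struct fake_cache *c)
{
	char scratch_a[64];	/* these locals land in the caller's frame */
	memset(scratch_a, 0, sizeof(scratch_a));
	memcpy(c->name, scratch_a, sizeof(c->name));
}

static inline __attribute__((always_inline))
void fast_path(struct fake_cache *c)
{
	char scratch_b[64];	/* so do these */
	memset(scratch_b, 1, sizeof(scratch_b));
	c->name[0] = scratch_b[0] - 1;
}

void alloc_like(struct fake_cache *c)
{
	/* With both helpers inlined, this function's frame must hold
	 * scratch_a and scratch_b; unless the compiler proves the two
	 * slots can be shared, the stack cost is closer to the sum of
	 * the callees than to any single one of them. */
	fast_path(c);
	slow_path(c);
}
```

On top of that, a 32-byte frame rounding would be applied once per call
level for the out-of-line case but could still leave one large rounded
frame in the inlined case.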
--