Message-Id: <20090517.152454.91703958.davem@davemloft.net>
Date: Sun, 17 May 2009 15:24:54 -0700 (PDT)
From: David Miller <davem@...emloft.net>
To: torvalds@...ux-foundation.org
Cc: hugh@...itas.com, ak@...ux.intel.com, ian.campbell@...rix.com,
jakub@...hat.com, linux-kernel@...r.kernel.org,
jesper.nilsson@...s.com, hannes@...xchg.org, arjan@...ux.intel.com,
akpm@...ux-foundation.org
Subject: Re: [PATCH] Fix print out of function which called WARN_ON()
From: Linus Torvalds <torvalds@...ux-foundation.org>
Date: Sun, 17 May 2009 15:18:19 -0700 (PDT)
> The thing is, on at least x86-64, any function using va_start() will
> allocate something like 64 bytes of stack space for the reg-save area. I'm
> not quite sure _why_ it does that, but it's very irritating, and it showed
> up quite clearly in some of the stackspace usage things.
>
> I even sent the gcc people a patch to fix the worst of it (gcc used to
> allocate about twice as much space because it also had a XMM save area
> even if you compiled without XMM support or something like that), but my
> point is, I'm afraid there is still a noticeable gap on the stack due to
> this, at least for the _fmt() case.
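(For illustration only, not code from this thread: a userspace
stand-in for the sort of _fmt() helper being described.  Merely
containing va_start() is enough, on x86-64, to make gcc reserve the
varargs register save area in the function's frame, whether or not
the call ever formats anything.  The name warn_fmt below is made up.)

#include <stdarg.h>
#include <stdio.h>

/* Hypothetical warn helper: the presence of va_start() alone makes
 * gcc on x86-64 set aside a register save area in this frame. */
static void warn_fmt(const char *file, int line, const char *fmt, ...)
{
	va_list args;
	char buf[128];

	va_start(args, fmt);
	vsnprintf(buf, sizeof(buf), fmt, args);
	va_end(args);

	fprintf(stderr, "WARNING: at %s:%d: %s\n", file, line, buf);
}

int main(void)
{
	warn_fmt(__FILE__, __LINE__, "value %d out of range", 42);
	return 0;
}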
I ran into this issue on sparc64 while helping someone investigate
stack usage there.
There is some strangeness with varargs in that it seems the opaque
object used to reference varargs is effectively an array which holds
first the arguments passed in registers and then the non-register
stack arguments.
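(Purely as an aside, and only for comparison with the sparc64 case:
on x86-64 the System V ABI makes that split explicit.  The opaque
va_list object tracks the register-passed arguments via a pointer to
a register save area and the stack-passed arguments via a separate
overflow pointer.  The typedef name below is changed from the ABI's
__builtin_va_list so it does not clash with <stdarg.h>.)

/* x86-64 SysV ABI layout of the va_list object. */
typedef struct {
	unsigned int gp_offset;      /* next unused GP register slot */
	unsigned int fp_offset;      /* next unused SSE register slot */
	void *overflow_arg_area;     /* arguments that arrived on the stack */
	void *reg_save_area;         /* spilled register arguments */
} va_list_x86_64[1];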
I haven't dug into the details yet, but on sparc64 we currently
always eat that extra space (in addition to the normal register
window stack space costs), and I had intended to look into
eliminating the varargs incoming argument save slots for cases where
we are not doing any varargs work at all.
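(A sketch of the C-level workaround, assuming gcc keeps emitting the
save slots unconditionally: confine va_start() to an out-of-line
helper so that only callers which actually format something pay for
the incoming-argument save area.  The names warn_slowpath and
warn_plain are made up for the example.)

#include <stdarg.h>
#include <stdio.h>

/* Only this helper contains va_start(), so only its frame needs the
 * varargs incoming-argument save slots. */
__attribute__((noinline))
void warn_slowpath(const char *fmt, ...)
{
	va_list args;

	va_start(args, fmt);
	vfprintf(stderr, fmt, args);
	va_end(args);
}

/* Common path: no varargs machinery, no extra save area. */
void warn_plain(const char *file, int line)
{
	fprintf(stderr, "WARNING: at %s:%d\n", file, line);
}

int main(void)
{
	warn_plain(__FILE__, __LINE__);
	warn_slowpath("WARNING: bad value %d\n", 42);
	return 0;
}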