Message-ID: <alpine.LFD.1.10.0804271118500.2896@woody.linux-foundation.org>
Date: Sun, 27 Apr 2008 11:24:28 -0700 (PDT)
From: Linus Torvalds <torvalds@...ux-foundation.org>
To: Christoph Hellwig <hch@...radead.org>
cc: Sam Ravnborg <sam@...nborg.org>, Adrian Bunk <bunk@...nel.org>,
linux arch <linux-arch@...r.kernel.org>,
LKML <linux-kernel@...r.kernel.org>, Ingo Molnar <mingo@...e.hu>,
David Miller <davem@...emloft.net>
Subject: Re: [PATCH] prepare kconfig inline optimization for all
architectures
On Sun, 27 Apr 2008, Christoph Hellwig wrote:
>
> As Linus mentioned, the hint doesn't make any sense because gcc will
> get it wrong anyway. In fact, when you look at kernel code, it tends
> to inline everything and the kitchen sink as long as there's just
> one caller, and this bloats the stack, but it doesn't inline where it
> needs to. Better not to mess with that and do it explicitly.
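
To make that concrete, here is a minimal sketch of the behavior being
described (the function names are made up, not from the tree): a static
helper with a single caller gets inlined by gcc even with no "inline"
anywhere, so its big local buffer is charged to the caller's stack frame:

#include <string.h>

/* Hypothetical example, not from the kernel tree. */
static int parse_one(const char *src)
{
	char buf[512];			/* large local buffer */

	strncpy(buf, src, sizeof(buf) - 1);
	buf[sizeof(buf) - 1] = '\0';
	return buf[0] == '#';
}

int parse_line(const char *src)
{
	/*
	 * parse_one() has exactly one caller, so gcc will normally
	 * inline it here even though nobody said "inline" -- its
	 * 512-byte buffer then ends up in parse_line()'s stack frame.
	 */
	return parse_one(src);
}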
The thing is, the "inline" vs "always_inline" thing _could_ make sense,
but sadly doesn't much.
Part of it is that gcc imnsho inlines too aggressively anyway in the
absence of "inline", so there's no way "inline" can mean "you might
inline this", because gcc will do that anyway even without it. As a
result, in _practice_ "inline" and "always_inline" end up being very close
to each other - perhaps more so than they should be.
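
For reference, the kernel's compiler headers wire this up roughly along
these lines (a simplified sketch, assuming the option under discussion is
CONFIG_OPTIMIZE_INLINING; the real ifdefs in include/linux/compiler*.h are
a bit more involved):

/* Simplified sketch of what include/linux/compiler*.h do. */
#define __always_inline	inline __attribute__((always_inline))

#ifndef CONFIG_OPTIMIZE_INLINING
/*
 * With the option off, plain "inline" is still forced, so in
 * practice it behaves exactly like __always_inline.
 */
#define inline		inline __attribute__((always_inline))
#endif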
I do obviously think that we're right to move in the direction of
"inline" being a hint. In fact, the biggest issue I have with the new
kconfig option is that I think it should probably be unconditional, but I
suspect that compiler issues and architecture issues make that not a
good idea.
It will take time before we've sorted out all the fall-out, because I bet
there is still code out there that _should_ use __always_inline, but
doesn't.
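
One hypothetical example of the kind of helper that wants __always_inline
rather than a hint (names made up, this is a sketch rather than code from
the tree):

#include <string.h>

#define __always_inline	inline __attribute__((always_inline))

/* Hypothetical helper, not from the tree. */
static __always_inline void *copy_small(void *dst, const void *src,
					unsigned long size)
{
	/*
	 * The __builtin_constant_p() test only pays off when this body
	 * is expanded into the caller, where "size" may be a compile-time
	 * constant.  If gcc chose not to inline it, the test would always
	 * be false and we'd always take the out-of-line memcpy(): still
	 * correct, but the whole point of the helper is gone.  Relying on
	 * the "inline" hint here is exactly the kind of latent problem
	 * mentioned above.
	 */
	if (__builtin_constant_p(size) && size <= 8)
		return __builtin_memcpy(dst, src, size);
	return memcpy(dst, src, size);
}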
Linus
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/