Message-ID: <alpine.LFD.2.02.1105172354040.3078@ionos>
Date: Wed, 18 May 2011 00:59:59 +0200 (CEST)
From: Thomas Gleixner <tglx@...utronix.de>
To: Andi Kleen <andi@...stfloor.org>
cc: Andy Lutomirski <luto@....EDU>, Ingo Molnar <mingo@...e.hu>,
x86@...nel.org, linux-kernel@...r.kernel.org,
Linus Torvalds <torvalds@...ux-foundation.org>,
"David S. Miller" <davem@...emloft.net>,
Eric Dumazet <eric.dumazet@...il.com>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>,
Borislav Petkov <bp@...64.org>
Subject: Re: [PATCH v4 0/6] Micro-optimize vclock_gettime

On Tue, 17 May 2011, Andi Kleen wrote:
> Andy Lutomirski <luto@....EDU> writes:
> >
> > On KVM on Sandy Bridge, I can emulate a vsyscall that does nothing
> > in 400ns or so. I'll try to make this code emulate real vsyscalls
> > over the weekend. This was much easier than I expected.
>
> How about the performance of all the statically linked programs? I guess

_ALL_ the statically linked programs? Point out a single one which
matters and _IS_ performance critical.

> you just declared they don't matter? gettimeofday is quite critical
> and adding an exception into it is just a performance disaster.
>
> Also it's always a dangerous assumption to think that all
> programs on Linux use glibc ("all world is a Vax")
>
> In fact more and more Linux users are using different libcs these
> days (like Android or embedded systems or languages with special
> runtime systems). Who knows if all those other libraries use the vDSO?

Which is completely irrelevant to x86_64. Point to a single relevant
x86_64 embedded system to which any of the above handwaving applies.
> And then there are of course the old glibcs. A lot of people
> (including me) use new kernels with old userland.

And how is your use case performance critical?

Furthermore, any halfway up-to-date deployment is using the vDSO for
obvious reasons, and the archaic stuff which might be affected is not
using a recent kernel at all (except for akpm on his retro laptop, but
that "performance penalty" is probably the least of his worries).

> For me this seems like a very risky move -- breaking performance of
> previously perfectly good setups for very little reason.
>
> Given the old vsyscall code is somewhat ugly -- I wouldn't argue that --

It's not somewhat ugly. It's a design failure and, more importantly,
it's a risk - and you very well know that.

> but compatibility (including performance compatibility) has always
> been important in Linux and we have far uglier code around in the
> name of compatibility.

We have ugly code around, but it has always been more important to fix
security risks than to keep them around for performance's sake.

> And the "security problem" you keep talking about can be fixed much
> more easily and more compatibly, as I pointed out.

Unless you come forth with a patch which addresses _ALL_ reasons which
caused me to remove your superior "Make the vsyscall less risky"
approach, just admit that it's fcked up by design and be done with it.

Stop your "as I pointed out" handwaving please. That code was broken
and even exploitable and I removed it for more than one very good
reason. Stop claiming that your "design" is in any way fixable. It's
not.
> As far as I'm concerned the change is a bad idea.

As far as I'm concerned you seem to be regaining your old habit of
defending your proven-to-be-wrong "design" decisions no matter what.
And as long as you can't come up with documented proof that a single
application is affected by such a change, STFU!

Even if you are not able to prove it, it's a total no-brainer to revert
such a change and make it CONFIG_KEEP_FCKUP=y dependent in the very
unlikely case that we get a proper and reasonable regression report.

Thanks,
tglx
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/