Message-ID: <5321F72E.7030109@zytor.com>
Date: Thu, 13 Mar 2014 11:21:34 -0700
From: "H. Peter Anvin" <hpa@...or.com>
To: Andy Lutomirski <luto@...capital.net>
CC: Stefani Seibold <stefani@...bold.net>,
Greg KH <gregkh@...uxfoundation.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
X86 ML <x86@...nel.org>, Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>,
Andi Kleen <ak@...ux.intel.com>,
Andrea Arcangeli <aarcange@...hat.com>,
John Stultz <john.stultz@...aro.org>,
Pavel Emelyanov <xemul@...allels.com>,
Cyrill Gorcunov <gorcunov@...nvz.org>,
andriy.shevchenko@...ux.intel.com,
Martin Runge <Martin.Runge@...de-schwarz.com>,
Andreas Brief <Andreas.Brief@...de-schwarz.com>,
Linus Torvalds <torvalds@...ux-foundation.org>
Subject: Re: [PATCH 3/3] x86, vdso32: handle 32 bit vDSO larger one page
On 03/13/2014 11:08 AM, H. Peter Anvin wrote:
> On 03/13/2014 10:28 AM, Andy Lutomirski wrote:
>>
>> Does this mean you prefer the relocation approach to the compat vdso
>> removal approach? It seems like Linus is okay with either one.
>>
>
> Actually, thinking about it, removing it is probably better:
>
> a) gets rid of legacy code, making room for unification;
> b) either way, enabling compat support (whether by relocation or by
> disabling the vdso) has a performance penalty for *all* processes.
>
> The only way to avoid that is to have a vdso at a fixed address across
> all processes, either in the fixmap or in the user area (presumably at
> the very top.)
>
So, going back and re-reading all the threads, the consensus was to
remove the compat vdso, but recycle the CONFIG_COMPAT_VDSO
configuration option name for the new default-off option.
It is important that anyone who actually cares about performance unsets
the option.
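
To make that concrete, the recycled option in the x86 Kconfig would
presumably end up looking something like the sketch below.  This is
only an illustration of the default-off idea, not the actual patch;
the prompt wording, dependencies and help text are placeholders:

	config COMPAT_VDSO
		def_bool n
		prompt "Disable the 32-bit vDSO (needed only for ancient glibc)"
		depends on X86_32 || IA32_EMULATION
		---help---
		  Some ancient glibc versions crash unless the 32-bit vDSO is
		  mapped at the fixed address recorded in its program headers.
		  Selecting this keeps those binaries working, at a performance
		  cost for *every* 32-bit process.

		  If unsure, say N.

With def_bool n the fast path is what a new config gets unless the
compat behavior is explicitly requested, which matches the point above
about unsetting the option if you care about performance.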
-hpa