Message-ID: <CA+55aFwuVYds=ssv755WKBvux1ru6fcDLGi+ORSFyU8xYP7+=w@mail.gmail.com>
Date: Mon, 10 Mar 2014 14:20:34 -0700
From: Linus Torvalds <torvalds@...ux-foundation.org>
To: "H. Peter Anvin" <hpa@...ux.intel.com>
Cc: Stefani Seibold <stefani@...bold.net>,
Andy Lutomirski <luto@...capital.net>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Andreas Brief <Andreas.Brief@...de-schwarz.com>,
Martin Runge <Martin.Runge@...de-schwarz.com>
Subject: Re: [x86, vdso] BUG: unable to handle kernel paging request at d34bd000
On Mon, Mar 10, 2014 at 1:19 PM, Linus Torvalds
<torvalds@...ux-foundation.org> wrote:
>
> If the only immediate problem is the code generation size, then Andy
> already had a (simpler) hack-around:
>
> #undef CONFIG_OPTIMIZE_INLINING
> #undef CONFIG_X86_PPRO_FENCE
>
> in vclock_gettime.c
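For context, a minimal sketch of how that hack-around would sit at the top of
arch/x86/vdso/vclock_gettime.c (illustrative only; the point is just that the
#undefs have to precede any header that tests those config symbols):

	/*
	 * Sketch, not the actual patch: undo these config symbols before
	 * any include gets a chance to look at them, so the vdso is built
	 * with real inlining and without the PPro barrier workaround.
	 */
	#undef CONFIG_OPTIMIZE_INLINING   /* make "inline" behave as __always_inline */
	#undef CONFIG_X86_PPRO_FENCE      /* skip the heavyweight PPro fences */

	/* ... rest of vclock_gettime.c unchanged ... */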
Btw, we should seriously consider getting rid of CONFIG_X86_PPRO_FENCE.

It was of questionable value to begin with, and I think that the
actual PPro bug is about one of

 - Errata 66, "Delayed line invalidation".
 - Errata 92, "Potential loss of data coherency"

both of which affect all PPro versions afaik (there is also a UP
errata 51 wrt ordering of cached and uncached accesses that was fixed
in the sB1 stepping).

And as far as I know, we have never actually seen the bug in real
life, EVEN WHEN PPRO WAS COMMON. The workaround was always based on
knowledge of the errata afaik.

So I do think we might want to consider retiring that config option
entirely as a "historical oddity".

And very much so for the vdso case. Do we even do the asm alternative
fixups for the vdso?
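For reference, the kind of conditional that option gates in the x86 barrier
definitions looks roughly like this (reconstructed from memory, so take the
exact names and placement with a grain of salt):

	#ifdef CONFIG_X86_PPRO_FENCE
	/* PPro workaround: a read barrier has to be a real fence */
	# define smp_rmb()	rmb()
	#else
	/* normal x86: loads are ordered, a compiler barrier is enough */
	# define smp_rmb()	barrier()
	#endif

On 32-bit, rmb() itself is an asm alternative whose fallback is a
lock-prefixed add on the stack, so if the alternative patching never runs
on the vdso text, userspace would keep executing the locked instruction,
which is presumably part of why the #undef helps the vdso code generation.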
I also suspect we should get rid of CONFIG_X86_OOSTORE, or at least
limit it to !SMP - I don't think anybody ever made SMP systems with
those IDT/Centaur Winchip chips in them.

Linus