Message-ID: <CAGXu5j+NB16S2gAz9TOW6izZykuOXqRoP0nDkTqgn02OJ2ht5Q@mail.gmail.com>
Date: Mon, 11 Mar 2019 09:55:18 -0700
From: Kees Cook <keescook@...omium.org>
To: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Thomas Gleixner <tglx@...utronix.de>,
Linux List Kernel Mailing <linux-kernel@...r.kernel.org>,
"the arch/x86 maintainers" <x86@...nel.org>
Subject: Re: [GIT pull] x86/asm for 5.1
On Mon, Mar 11, 2019 at 8:39 AM Linus Torvalds
<torvalds@...ux-foundation.org> wrote:
> They are set early in the boot process, but they are set separately
> for each CPU, and not at the same time.
>
> And that's important. It's important because when the *first* CPU sets
> the "you now need to pin and check the SMAP bit", the _other_ CPU"s
> have not set it yet.
A clarification, just so I get the design considerations right: I
used a global because of the observation that once CPU setup is done,
the pin mask is the same for all CPUs. However, yes, I see your point
that the chosen implementation opens a timing window while the other
CPUs are still bringing themselves up. What about enabling the pin
mask only once CPU init is finished? The goal is to protect those
bits at runtime (and to stay out of the way at init time).
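
Roughly, something like this (just a sketch; cr4_pin_mask and
cr4_pin_init() are illustrative names, not the actual patch):

	/* Stays zero during bring-up, so early per-CPU writes are unaffected. */
	static unsigned long cr4_pin_mask __ro_after_init;

	/* Called once, after all CPUs have finished their feature setup. */
	static void __init cr4_pin_init(void)
	{
		unsigned long mask = 0;

		if (boot_cpu_has(X86_FEATURE_SMEP))
			mask |= X86_CR4_SMEP;
		if (boot_cpu_has(X86_FEATURE_SMAP))
			mask |= X86_CR4_SMAP;

		cr4_pin_mask = mask;
	}

	static inline void native_write_cr4(unsigned long val)
	{
		/* Re-assert the pinned bits on every runtime write. */
		val |= cr4_pin_mask;
		asm volatile("mov %0, %%cr4" : : "r" (val) : "memory");
	}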
I'll play with the "+r" output constraint as a way to avoid volatile,
but my earlier attempts at that did not produce machine code that was
actually defensive against skipping the "or". (The cr0 case works; I
did use the "+r" method there. It's easy since the pinned bit is a
hard-coded value -- in the cr4 case I had less control over what the
compiler decided to do with register spilling.)
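
For reference, the working cr0 case looks roughly like this (a
sketch; the comment describes the intent, not a guarantee about the
compiler's register allocation, which is exactly the problem I hit
with cr4):

	static inline void native_write_cr0(unsigned long val)
	{
		/*
		 * WP is a compile-time constant, and "+r" ties the asm's
		 * input and output to one register, so the intent is that
		 * the "or" feeds the same register the mov consumes and
		 * there is no separate, unpinned copy of val left around
		 * for an attacker who jumps past the "or".
		 */
		val |= X86_CR0_WP;
		asm volatile("mov %0, %%cr0" : "+r" (val) : : "memory");
	}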
Anyway, I'll work on a cleaner version and include you on CC.
Thanks!
--
Kees Cook