Message-ID: <CAHk-=wh6=X2TauxJzEBkLwcq5wcV3zM53FLDiVjD4LiBRcD9Qg@mail.gmail.com>
Date:   Mon, 11 Mar 2019 10:41:33 -0700
From:   Linus Torvalds <torvalds@...ux-foundation.org>
To:     Kees Cook <keescook@...omium.org>
Cc:     Thomas Gleixner <tglx@...utronix.de>,
        Linux List Kernel Mailing <linux-kernel@...r.kernel.org>,
        "the arch/x86 maintainers" <x86@...nel.org>
Subject: Re: [GIT pull] x86/asm for 5.1

On Mon, Mar 11, 2019 at 9:55 AM Kees Cook <keescook@...omium.org> wrote:
>
> Clarification, just so I get the design considerations adjusted
> correctly... I did this with a global because of the observation that
> once CPU setup is done, the pin mask is the same for all CPUs.

Yes. Once we're booted, it's all the same.

It's only during bootup that it has this issue.

> However, yes, I see the point about the chosen implementation
> resulting in a potential timing problem, etc. What about enabling the
> pin mask once CPU init is finished? The goal is to protect those bits
> during runtime (and to stay out of the way at init time).

Yes, that would fix things too, although I suspect you'll find that it
gets hairy for hotplug CPUs (including suspend/resume), because new
CPUs keep booting "forever".

One option - and perhaps a better one - is to really force the %cr4
bits for all CPUs *very* early. We actually have some of that logic
for %cr4 already, because %cr4 absolutely needs to enable things like
PAE and PGE before it even enables paging, since those bits are
already set in the base kernel page tables that it loads.

At that point, the SMEP/SMAP/UMIP bits would *really* be set globally
after the first CPU boots.
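The pinning idea under discussion can be sketched in user space. This is a
simulation, not kernel code: `cr4_pin`, `cpu_cr4`, `write_cr4()`, and
`finish_cpu_setup()` are hypothetical stand-ins. The point is only that once
the pin mask is populated, every write ORs the pinned bits back in, so a
stray write can no longer clear SMEP/SMAP/UMIP:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical user-space sketch of %cr4 bit pinning; these names are
 * stand-ins for the kernel symbols discussed above, not the real API. */

#define X86_CR4_UMIP (1UL << 11)
#define X86_CR4_SMEP (1UL << 20)
#define X86_CR4_SMAP (1UL << 21)

static uint64_t cr4_pin;  /* bits that must stay set; empty during early boot */
static uint64_t cpu_cr4;  /* stand-in for the hardware %cr4 register          */

static void write_cr4(uint64_t val)
{
	/* OR the pinned bits back in on every write, so the security
	 * bits cannot be dropped at runtime. */
	cpu_cr4 = val | cr4_pin;
}

static void finish_cpu_setup(void)
{
	/* Enable pinning only after boot-time CPU setup, so early code
	 * that toggles %cr4 stays out of the pin mask's way. */
	cr4_pin = X86_CR4_SMEP | X86_CR4_SMAP | X86_CR4_UMIP;
}
```

Enabling the mask only at the end of setup is what keeps it out of the way at
init time, as Kees describes above; the hotplug wrinkle is that each newly
booting CPU has to pass through that early window again.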

Actually, I think it might be less important to initialize %cr4 itself
early, and more important to initialize 'cpu_tlbstate.cr4'; right now
that actually gets initialized from the early %cr4 value (see
'cr4_init_shadow()').

So what might be workable is to make 'cr4_init_shadow()' itself just
do something like

        this_cpu_write(cpu_tlbstate.cr4, __read_cr4() | cr4_pin);

but I did *not* check whether there might be some other random
accesses to %cr4 in various legacy paths..

The "use a percpu pinning mask" seemed to be the simplest solution,
but the above might work fairly easily too.
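A minimal sketch of that 'cr4_init_shadow()' variant, again simulated in
user space: `hw_cr4`, `read_cr4()`, and `tlbstate_cr4` are hypothetical
stand-ins for the hardware register, `__read_cr4()`, and
`cpu_tlbstate.cr4` respectively:

```c
#include <assert.h>
#include <stdint.h>

/* Simulated sketch: the shadow copy of %cr4 picks up the pin mask when
 * it is initialized, so later code that consults the shadow (or rewrites
 * %cr4 from it) already carries the pinned bits. */

static uint64_t hw_cr4;        /* pretend hardware %cr4             */
static uint64_t tlbstate_cr4;  /* stand-in for cpu_tlbstate.cr4     */
static uint64_t cr4_pin;       /* bits to force on, set during boot */

static uint64_t read_cr4(void)
{
	return hw_cr4;
}

static void cr4_init_shadow(void)
{
	/* Analogue of the one-liner suggested above:
	 * this_cpu_write(cpu_tlbstate.cr4, __read_cr4() | cr4_pin); */
	tlbstate_cr4 = read_cr4() | cr4_pin;
}
```

The caveat in the message stands: this only covers the shadow-init path,
and any legacy code that touches %cr4 directly would bypass it.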

Hmm?

                  Linus
