Message-ID: <CAGXu5j+vBURRz2aUafmOY9nJ56Sr-YonvhE8OGJ+6QkOQe5ePQ@mail.gmail.com>
Date: Wed, 27 Feb 2019 11:45:03 -0800
From: Kees Cook <keescook@...omium.org>
To: Solar Designer <solar@...nwall.com>
Cc: Thomas Gleixner <tglx@...utronix.de>,
Peter Zijlstra <peterz@...radead.org>,
Jann Horn <jannh@...gle.com>,
Sean Christopherson <sean.j.christopherson@...el.com>,
Dominik Brodowski <linux@...inikbrodowski.net>,
Kernel Hardening <kernel-hardening@...ts.openwall.com>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 1/3] x86/asm: Pin sensitive CR0 bits
On Wed, Feb 27, 2019 at 2:44 AM Solar Designer <solar@...nwall.com> wrote:
>
> On Tue, Feb 26, 2019 at 03:36:45PM -0800, Kees Cook wrote:
> > static inline void native_write_cr0(unsigned long val)
> > {
> > - asm volatile("mov %0,%%cr0": : "r" (val), "m" (__force_order));
> > + bool warn = false;
> > +
> > +again:
> > + val |= X86_CR0_WP;
> > + /*
> > + * In order to have the compiler not optimize away the check
> > + * in the WARN_ONCE(), mark "val" as being also an output ("+r")
>
> This comment is now slightly out of date: the check is no longer "in the
> WARN_ONCE()". Ditto about the comment for CR4.
Ah yes, good point. I will adjust and send a v2 series.
>
> > + * by this asm() block so it will perform an explicit check, as
> > + * if it were "volatile".
> > + */
> > + asm volatile("mov %0,%%cr0": "+r" (val) : "m" (__force_order) : );
> > + /*
> > + * If the MOV above was used directly as a ROP gadget we can
> > + * notice the lack of pinned bits in "val" and start the function
> > + * from the beginning to gain the WP bit for sure. And do it
> > + * without first taking the exception for a WARN().
> > + */
> > + if ((val & X86_CR0_WP) != X86_CR0_WP) {
> > + warn = true;
> > + goto again;
> > + }
> > + WARN_ONCE(warn, "Attempt to unpin X86_CR0_WP, cr0 bypass attack?!\n");
> > }
>
> Alexander
--
Kees Cook