Message-ID: <20171208094400.wqnezwukq5yx4mgq@gmail.com>
Date: Fri, 8 Dec 2017 10:44:00 +0100
From: Ingo Molnar <mingo@...nel.org>
To: Thomas Gleixner <tglx@...utronix.de>
Cc: Andy Lutomirski <luto@...capital.net>,
	Andy Lutomirski <luto@...nel.org>,
	Borislav Petkov <bp@...en8.de>, X86 ML <x86@...nel.org>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	Brian Gerst <brgerst@...il.com>,
	David Laight <David.Laight@...lab.com>,
	Kees Cook <keescook@...omium.org>,
	Peter Zijlstra <peterz@...radead.org>
Subject: Re: [PATCH] LDT improvements

* Thomas Gleixner <tglx@...utronix.de> wrote:
> On Fri, 8 Dec 2017, Ingo Molnar wrote:
> > * Andy Lutomirski <luto@...capital.net> wrote:
> > > I don't love mucking with user address space. I'm also quite nervous about
> > > putting it in or near anything that could pass an access_ok check, since we're
> > > totally screwed if the bad guys can figure out how to write to it.
> >
> > Hm, robustness of the LDT address wrt. access_ok() is a valid concern.
> >
> > Can we have vmas with high addresses, in the vmalloc space for example?
> > IIRC the GPU code has precedents in that area.
> >
> > Since this is x86-64, limitation of the vmalloc() space is not an issue.
> >
> > I like Thomas's solution:
> >
> > - have the LDT in a regular mmap space vma (hence per process ASLR randomized),
> > but with the system bit set.
> >
> > - That would be an advantage even for non-PTI kernels, because mmap() is probably
> > more randomized than kmalloc().
>
> Randomization is pointless as long as you can get the LDT address in user
> space, i.e. w/o UMIP.

But with UMIP, unprivileged user-space won't be able to get the linear address of
the LDT. Now it's written out in /proc/self/maps.

> > - It would also be a cleaner approach all around, and would avoid the fixmap
> > complications and the scheduler muckery.
>
> The error code of such an access is always 0x03. So I added a special
> handler, which checks whether the address is in the LDT map range and
> verifies that the access bit in the descriptor is 0. If that's the case it
> sets it and returns. If not, the thing dies. That works.
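
Just to make sure I'm reading that logic correctly, here is a rough standalone
model of the check (the names, the struct layout and the bounds below are made
up purely for illustration, and the real code would of course write the
descriptor through the kernel's own writable alias rather than the faulting
user address):

	#include <stdbool.h>
	#include <stdint.h>

	#define DESC_ACCESSED	(1u << 0)	/* accessed bit in the type/access byte */

	struct ldt_desc {
		uint8_t type;			/* simplified: only the access byte is modelled */
	};

	static uintptr_t ldt_map_start, ldt_map_end;	/* hypothetical per-mm LDT mapping bounds */

	/* Returns true if the fault was handled, false if the task should die. */
	static bool handle_ldt_access_fault(uintptr_t addr, unsigned long error_code)
	{
		struct ldt_desc *desc = (struct ldt_desc *)addr;

		/* 0x03: protection-violation write in supervisor mode,
		 * i.e. the CPU trying to set the accessed bit itself. */
		if (error_code != 0x03)
			return false;

		if (addr < ldt_map_start || addr >= ldt_map_end)
			return false;

		/* Accessed bit already set: not a legitimate lazy update -> die. */
		if (desc->type & DESC_ACCESSED)
			return false;

		desc->type |= DESC_ACCESSED;	/* emulate what the CPU wanted to do */
		return true;
	}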

Are SMP races possible? For example, two threads both triggering the accessed-bit
fault, but only one of them succeeding in setting it. The other thread should not
die in this case, right?
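
If that's a real concern, a cmpxchg-style update of the access byte would let
the loser simply observe that the bit is already set and return. As a sketch
(plain C11 atomics and made-up names here, just to illustrate the idea; the
kernel would use its own cmpxchg()/try_cmpxchg() helpers):

	#include <stdatomic.h>
	#include <stdbool.h>
	#include <stdint.h>

	#define DESC_ACCESSED	(1u << 0)

	/*
	 * Returns true if this thread set the accessed bit, false if another
	 * thread got there first -- either way the fault counts as handled,
	 * so the "losing" thread is not killed.
	 */
	static bool set_accessed_bit(_Atomic uint8_t *type)
	{
		uint8_t old = atomic_load(type);

		do {
			if (old & DESC_ACCESSED)
				return false;	/* somebody else won the race: fine */
		} while (!atomic_compare_exchange_weak(type, &old, old | DESC_ACCESSED));

		return true;
	}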

Thanks,

	Ingo