Message-ID: <20190911200835.GD7834@swahl-linux>
Date:   Wed, 11 Sep 2019 15:08:35 -0500
From:   Steve Wahl <steve.wahl@....com>
To:     "Kirill A. Shutemov" <kirill@...temov.name>
Cc:     Steve Wahl <steve.wahl@....com>,
        Thomas Gleixner <tglx@...utronix.de>,
        Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
        "H. Peter Anvin" <hpa@...or.com>, x86@...nel.org,
        Juergen Gross <jgross@...e.com>,
        Brijesh Singh <brijesh.singh@....com>,
        Jordan Borgner <mail@...dan-borgner.de>,
        Feng Tang <feng.tang@...el.com>, linux-kernel@...r.kernel.org,
        Baoquan He <bhe@...hat.com>, russ.anderson@....com,
        dimitri.sivanich@....com, mike.travis@....com
Subject: Re: [PATCH] x86/boot/64: Make level2_kernel_pgt pages invalid
 outside kernel area.

On Wed, Sep 11, 2019 at 03:28:56AM +0300, Kirill A. Shutemov wrote:
> On Tue, Sep 10, 2019 at 09:28:10AM -0500, Steve Wahl wrote:
> > On Mon, Sep 09, 2019 at 11:14:14AM +0300, Kirill A. Shutemov wrote:
> > > On Fri, Sep 06, 2019 at 04:29:50PM -0500, Steve Wahl wrote:
> > > > ...
> > > > The answer is to invalidate the pages of this table outside the
> > > > address range occupied by the kernel before the page table is
> > > > activated.  This patch has been validated to fix this problem on our
> > > > hardware.
> > > 
> > > If the goal is to avoid *any* mapping of the reserved region to stop
> > > speculation, I don't think this patch will do the job. We still (likely)
> > > have the same memory mapped as part of the identity mapping. And it
> > > happens at least in two places: here and before on decompression stage.
> > 
> > I imagine you are likely correct; ideally you would not map any
> > reserved pages in these spaces.
> > 
> > I've been reading the code to try to understand what you say above.
> > For identity mappings in the kernel, I see level2_ident_pgt mapping
> > the first 1G.
> 
> This is for XEN case. Not sure how relevant it is for you.

I don't have much familiarity with XEN, and I'm not using it, but it
does seem to be enabled in the distribution kernels we deal with.
However, that mapping covers only memory below 4G.

> > And I see early_dynamic_pgts being set up with an identity mapping of
> > the kernel that seems to be pretty well restricted to the range _text
> > through _end.
> 
> Right, but rounded to 2M around the place the kernel was decompressed to.
> Some of the reserved areas from the listing below are smaller than 2M or
> not aligned to 2M.

The problematic reserved regions are aligned to 2M or greater.  See
the answer to "which reserved regions" below.
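
To make that concrete, here's a rough user-space sketch (not kernel code;
all the addresses are invented) of why a reserved region that is aligned to
2M can't be pulled in by rounding the kernel range out to 2M, while an
unaligned one could be:

/*
 * Illustration only -- not kernel code.  Shows how rounding the kernel's
 * [_text, _end) range out to 2M boundaries interacts with an adjacent
 * reserved region.  All addresses are invented.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PMD_SIZE (2ULL << 20)			/* 2 MiB mapping granule */

static uint64_t align_up_2m(uint64_t x)
{
	return (x + PMD_SIZE - 1) & ~(PMD_SIZE - 1);
}

/* Does rounding kernel_end up to 2M spill into a region at resv_start? */
static bool rounding_spills_into(uint64_t kernel_end, uint64_t resv_start)
{
	return align_up_2m(kernel_end) > resv_start;
}

int main(void)
{
	uint64_t kernel_end = 0x2f7fe80000ULL;	/* hypothetical _end, not 2M aligned */

	/* Reserved region aligned to 2M (as on our hardware): no spill. */
	printf("2M-aligned reserved start: spill=%d\n",
	       rounding_spills_into(kernel_end, 0x2f80000000ULL));

	/* Unaligned reserved region just past _end: the rounding spills in. */
	printf("unaligned reserved start:  spill=%d\n",
	       rounding_spills_into(kernel_end, 0x2f7ff00000ULL));

	return 0;
}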

> > Within the decompression code, I see an identity mapping of the first
> > 4G set up within the 32-bit code.  I believe we go past that to the
> > startup_64 entry point.  (I don't know how common that path is, but I
> > don't have a way to test it without figuring out how to force it.)
> 
> The kernel can start in 64-bit mode directly, and in this case we inherit
> page tables from the bootloader/BIOS. They are trusted to provide an
> identity mapping that covers at least the kernel (plus some more essential
> stuff), but they are free to map more.

I haven't looked at the bootloader, at least not yet.

If the tables supplied by the BIOS don't follow the rules, that is
somebody else's problem.  (And if needed, I'll hunt them down.)

> > From a pragmatic standpoint, the guy who can verify this for me is on
> > vacation, but I believe our BIOS won't ever place the halt-causing
> > ranges in a space below 4GiB, which explains why this patch works for
> > our hardware.  (We do have reserved regions below 4G, just not the
> > ones whose access causes the hardware to halt.)
> > 
> > In case it helps you picture the situation, our hardware takes a small
> > portion of RAM from the end of each NUMA node (or it might be pairs or
> > quads of NUMA nodes, I'm not entirely clear on this at the moment) for
> > its own purposes.  Here's a section of our e820 table:
> > 
> > [    0.000000] BIOS-e820: [mem 0x000000007c000000-0x000000008fffffff] reserved
> > [    0.000000] BIOS-e820: [mem 0x00000000f8000000-0x00000000fbffffff] reserved
> > [    0.000000] BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved
> > [    0.000000] BIOS-e820: [mem 0x0000000100000000-0x0000002f7fffffff] usable
> > [    0.000000] BIOS-e820: [mem 0x0000002f80000000-0x000000303fffffff] reserved *
> > [    0.000000] BIOS-e820: [mem 0x0000003040000000-0x0000005f7bffffff] usable
> > [    0.000000] BIOS-e820: [mem 0x0000005f7c000000-0x000000603fffffff] reserved *
> > [    0.000000] BIOS-e820: [mem 0x0000006040000000-0x0000008f7bffffff] usable
> > [    0.000000] BIOS-e820: [mem 0x0000008f7c000000-0x000000903fffffff] reserved *
> > [    0.000000] BIOS-e820: [mem 0x0000009040000000-0x000000bf7bffffff] usable
> > [    0.000000] BIOS-e820: [mem 0x000000bf7c000000-0x000000c03fffffff] reserved *
> > [    0.000000] BIOS-e820: [mem 0x000000c040000000-0x000000ef7bffffff] usable
> > [    0.000000] BIOS-e820: [mem 0x000000ef7c000000-0x000000f03fffffff] reserved *
> > [    0.000000] BIOS-e820: [mem 0x000000f040000000-0x0000011f7bffffff] usable
> > [    0.000000] BIOS-e820: [mem 0x0000011f7c000000-0x000001203fffffff] reserved *
> > [    0.000000] BIOS-e820: [mem 0x0000012040000000-0x0000014f7bffffff] usable
> > [    0.000000] BIOS-e820: [mem 0x0000014f7c000000-0x000001503fffffff] reserved *
> > [    0.000000] BIOS-e820: [mem 0x0000015040000000-0x0000017f7bffffff] usable
> > [    0.000000] BIOS-e820: [mem 0x0000017f7c000000-0x000001803fffffff] reserved *
> 
> It would be interesting to know which of them are problematic.

It's pretty much all of the reserved regions above 4G.  I edited the
table above and put asterisks on the lines describing the problematic
regions.  The number and size of them will vary based on the number of
NUMA nodes and other factors.

As for the alignment of these: while examining the values above, I realized
I'm working with a BIOS version that has already tried to work around
this problem by aligning to 1GiB.  My expert is still on vacation, but
a coworker and I looked at the BIOS source, and we're 99% certain the
alignment and granularity without the workaround code would be 64MiB,
which accommodates the 2MiB alignment issues discussed above.

I didn't want to delay my response until my expert was back.  I will
send another message when I can get this confirmed.
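
To make the scenario concrete in the meantime, here is a rough user-space
sketch that checks a hypothetical kernel placement against the starred
regions from the table above.  The placement and the roughly 1G window size
mapped around the kernel are assumptions for illustration only:

/*
 * Illustration only.  Check whether a mapped window around a hypothetical
 * kernel placement overlaps the starred reserved regions from the e820
 * listing above.  The placement and window size are assumptions.
 */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

struct range { uint64_t start, end; };		/* physical, end exclusive */

/* The starred reserved regions from the e820 table above. */
static const struct range reserved[] = {
	{ 0x002f80000000ULL, 0x003040000000ULL },
	{ 0x005f7c000000ULL, 0x006040000000ULL },
	{ 0x008f7c000000ULL, 0x009040000000ULL },
	{ 0x00bf7c000000ULL, 0x00c040000000ULL },
	{ 0x00ef7c000000ULL, 0x00f040000000ULL },
	{ 0x011f7c000000ULL, 0x012040000000ULL },
	{ 0x014f7c000000ULL, 0x015040000000ULL },
	{ 0x017f7c000000ULL, 0x018040000000ULL },
};

int main(void)
{
	/* Hypothetical KASLR placement near the end of the first usable
	 * range, with roughly 1G of space mapped around it. */
	struct range map = { 0x2f60000000ULL, 0x2f60000000ULL + (1ULL << 30) };

	for (size_t i = 0; i < sizeof(reserved) / sizeof(reserved[0]); i++) {
		if (map.start < reserved[i].end && reserved[i].start < map.end)
			printf("window %#llx-%#llx overlaps reserved %#llx-%#llx\n",
			       (unsigned long long)map.start,
			       (unsigned long long)map.end,
			       (unsigned long long)reserved[i].start,
			       (unsigned long long)reserved[i].end);
	}
	return 0;
}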

> > Our problem occurs when KASLR (or kexec) places the kernel close
> > enough to the end of one of the usable sections, and the 1G of 1:1
> > mapped space includes a portion of the following reserved section, and
> > speculation touches the reserved area.
> 
> Are you sure that it's speculative access to blame? Speculative access
> must not cause change in architectural state.

That's a hard one to prove.  However, our hardware stops detecting
accesses to these areas (and halting) when we stop including them in
valid page table entries.  With that change, any non-speculative
access to these areas should have turned into an access to an invalid
page, i.e. a kernel-mode page fault and oops.  Instead, the accesses
just go away.  That says to me that they are speculative accesses.
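
For reference, here is a minimal user-space sketch of what "stop including
them in valid page table entries" amounts to: mark the 2M entries of the
kernel-area table that fall outside [_text, _end) not-present.  The window,
addresses, and flag bit below are simulated for illustration and are not
the kernel's actual structures:

/*
 * Illustration only -- not kernel code.  Simulate a 512-entry, 2M-granular
 * table covering a 1G window around the kernel and mark every entry outside
 * [_text, _end) not-present, the approach discussed in this thread.  The
 * window, addresses, and flag bit are invented.
 */
#include <stdint.h>
#include <stdio.h>

#define PMD_SIZE	(2ULL << 20)		/* 2 MiB per entry */
#define PTRS_PER_PMD	512			/* 512 entries -> 1G of coverage */
#define FLAG_PRESENT	0x1ULL			/* stand-in for the present bit */

int main(void)
{
	uint64_t pmd[PTRS_PER_PMD];
	uint64_t window_base = 0x2f50000000ULL;	/* hypothetical 1G window */
	uint64_t text = 0x2f78000000ULL;	/* hypothetical _text */
	uint64_t end  = 0x2f7a000000ULL;	/* hypothetical _end */
	uint64_t resv_start = 0x2f80000000ULL;	/* first starred region above */
	int first = (int)((text - window_base) / PMD_SIZE);
	int last  = (int)((end - 1 - window_base) / PMD_SIZE);
	int i, stray = 0;

	/* Initially every entry maps its 2M slice of the window. */
	for (i = 0; i < PTRS_PER_PMD; i++)
		pmd[i] = (window_base + (uint64_t)i * PMD_SIZE) | FLAG_PRESENT;

	/* The fix: invalidate the entries before and after the kernel image. */
	for (i = 0; i < PTRS_PER_PMD; i++)
		if (i < first || i > last)
			pmd[i] &= ~FLAG_PRESENT;

	/* Count the now-invalid entries that used to cover reserved memory. */
	for (i = 0; i < PTRS_PER_PMD; i++)
		if (!(pmd[i] & FLAG_PRESENT) &&
		    window_base + (uint64_t)i * PMD_SIZE >= resv_start)
			stray++;

	printf("entries %d..%d stay present; %d entries that covered reserved memory are now invalid\n",
	       first, last, stray);
	return 0;
}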

Thank you for your time looking into this with me!

--> Steve Wahl

-- 
Steve Wahl, Hewlett Packard Enterprise
