Message-ID: <20191126171421.GB28423@gmail.com>
Date:   Tue, 26 Nov 2019 18:14:21 +0100
From:   Ingo Molnar <mingo@...nel.org>
To:     Joerg Roedel <jroedel@...e.de>
Cc:     Joerg Roedel <joro@...tes.org>,
        Dave Hansen <dave.hansen@...ux.intel.com>,
        Andy Lutomirski <luto@...nel.org>,
        Thomas Gleixner <tglx@...utronix.de>,
        Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
        Peter Zijlstra <peterz@...radead.org>, hpa@...or.com,
        x86@...nel.org, linux-kernel@...r.kernel.org,
        stable@...r.kernel.org
Subject: Re: [PATCH -tip, v2] x86/mm/32: Sync only to VMALLOC_END in
 vmalloc_sync_all()


* Joerg Roedel <jroedel@...e.de> wrote:

> Hi Ingo,
> 
> On Tue, Nov 26, 2019 at 12:11:19PM +0100, Ingo Molnar wrote:
> > Having vmalloc_sync_all() also iterate over the LDT range is buggy, 
> > because for the LDT the mappings are *intentionally* and fundamentally 
> > different between processes, i.e. not synchronized.
> 
> Yes, you are right, your patch description is much better - thanks for
> making it clearer and more correct.
> 
> > Furthermore I'm not sure we need to iterate over the PKMAP range either: 
> > those are effectively permanent PMDs as well, and they are not part of 
> > the vmalloc.c lazy deallocation scheme in any case - they are handled 
> > entirely separately in mm/highmem.c et al.
> 
> I looked a bit at that, and I didn't find an explicit place where the
> PKMAP PMD gets established. It probably happens implicitly on the first
> kmap() call, so we are safe as long as the first call to kmap happens
> before the kernel starts the first userspace process.

No, it happens during early boot, in permanent_kmaps_init():

        vaddr = PKMAP_BASE;
        page_table_range_init(vaddr, vaddr + PAGE_SIZE*LAST_PKMAP, pgd_base);

That page_table_range_init() will go from PKMAP_BASE to the last PKMAP, 
which on PAE kernels is typically 0xff600000...0xff800000, 2MB in size, 
taking up exactly one PMD entry.

This single pagetable page, covering 2MB of virtual memory via 4K 
entries, gets passed on to the mm/highmem.c code via:

        pkmap_page_table = pte;

The pkmap_page_table is mapped early on into init_mm, every task started 
after that with a new pgd inherits it, and the pmd entry never changes, 
so there's nothing to synchronize.
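
For completeness, the whole thing looks roughly like this (paraphrasing 
arch/x86/mm/init_32.c from memory, so details may differ slightly):

        static void __init permanent_kmaps_init(pgd_t *pgd_base)
        {
                unsigned long vaddr;
                pgd_t *pgd;
                p4d_t *p4d;
                pud_t *pud;
                pmd_t *pmd;
                pte_t *pte;

                /* LAST_PKMAP is 512 on PAE, so this covers 512*4K = 2MB,
                 * i.e. exactly one PMD: */
                vaddr = PKMAP_BASE;
                page_table_range_init(vaddr, vaddr + PAGE_SIZE*LAST_PKMAP, pgd_base);

                /* Walk down to the pagetable page just created and
                 * publish it for mm/highmem.c: */
                pgd = swapper_pg_dir + pgd_index(vaddr);
                p4d = p4d_offset(pgd, vaddr);
                pud = pud_offset(p4d, vaddr);
                pmd = pmd_offset(pud, vaddr);
                pte = pte_offset_kernel(pmd, vaddr);
                pkmap_page_table = pte;
        }

This runs once, against init_mm's swapper_pg_dir, which is why the pmd 
entry is stable for the lifetime of the system.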

The pte entries within this single pagetable page do change frequently 
via the kmap() code, but since the pagetable page is shared between all 
tasks and the TLB flushes are SMP safe, it all stays synchronized by 
modifying only pkmap_page_table, as it should be.
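
For reference, the slot assignment in mm/highmem.c looks roughly like 
this (a simplified sketch of map_new_virtual() from memory - the locking 
and the wrap-around/flush handling of last_pkmap_nr are omitted):

        /* Install the highmem page into the chosen pkmap slot: */
        vaddr = PKMAP_ADDR(last_pkmap_nr);
        set_pte_at(&init_mm, vaddr,
                   &(pkmap_page_table[last_pkmap_nr]),
                   mk_pte(page, kmap_prot));
        pkmap_count[last_pkmap_nr] = 1;
        set_page_address(page, (void *)vaddr);

Note how it only ever writes pte entries through pkmap_page_table, and 
only against init_mm, so no per-process pagetables are involved.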

> But that is not an issue that should be handled by vmalloc_sync_all(), 
> as the name already implies that it only cares about the vmalloc range.

Well, hypothetically it could *accidentally* have some essential effect 
on bootstrapping the PKMAP pagetables - I don't think that's the case, 
based on my reading of the code, but only testing will tell for sure.
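
For context, the sync loop in the 32-bit vmalloc_sync_all() looks 
roughly like this (a simplified sketch, paraphrasing arch/x86/mm/fault.c 
from memory); the patch only changes the upper bound from FIXADDR_TOP to 
VMALLOC_END, so the PKMAP and LDT ranges are no longer walked:

        for (address = VMALLOC_START & PMD_MASK;
             address >= TASK_SIZE_MAX && address < VMALLOC_END;
             address += PMD_SIZE) {
                struct page *page;

                spin_lock(&pgd_lock);
                list_for_each_entry(page, &pgd_list, lru) {
                        /* copy the init_mm pmd entry into this pgd */
                        vmalloc_sync_one(page_address(page), address);
                }
                spin_unlock(&pgd_lock);
        }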

> So your change to only iterate to VMALLOC_END makes sense and we should 
> establish the PKMAP PMD at a defined place to make sure it exists when 
> we start the first process.

I believe that's done in permanent_kmaps_init().

> > Note that this is *completely* untested - I might have wrecked PKMAP in 
> > my ignorance. Mind giving it a careful review and a test?
> 
> My testing environment for 32 bit is quite limited these days, but I 
> tested it in my PTI-x32 environment and the patch below works perfectly 
> fine there and still fixes the ldt_gdt selftest.

Cool, thanks! I'll apply it with your Tested-by.

	Ingo
