Message-ID: <4f43d4d4-832d-37bc-be7f-da0da735bbec@intel.com>
Date:   Tue, 23 Apr 2019 09:04:45 -0700
From:   Dave Hansen <dave.hansen@...el.com>
To:     Laurent Dufour <ldufour@...ux.vnet.ibm.com>,
        Michael Ellerman <mpe@...erman.id.au>,
        Thomas Gleixner <tglx@...utronix.de>,
        Dave Hansen <dave.hansen@...ux.intel.com>
Cc:     LKML <linux-kernel@...r.kernel.org>, rguenther@...e.de,
        mhocko@...e.com, vbabka@...e.cz, luto@...capital.net,
        x86@...nel.org, Andrew Morton <akpm@...ux-foundation.org>,
        linux-mm@...ck.org, stable@...r.kernel.org
Subject: Re: [PATCH] x86/mpx: fix recursive munmap() corruption

On 4/23/19 4:16 AM, Laurent Dufour wrote:
> My only concern is the error path.
> Calling arch_unmap() before handling any error case means that it will
> have to be undone, and there is no way to do so.

Is there a practical scenario where munmap() of the VDSO can split a
VMA?  If the VDSO is guaranteed to be a single page, it would have to be
a scenario where munmap() was called on a range that included the VDSO
*and* another VMA that we failed to split.
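
Something like this would be needed (addresses and layout entirely
hypothetical, not tested):

	/* neighbor VMA at [addr, addr + 4*PAGE_SIZE), one VDSO page
	 * mapped directly above it at addr + 4*PAGE_SIZE */
	munmap(addr + 2*PAGE_SIZE, 3*PAGE_SIZE);
	/* covers the VDSO *and* the top half of the neighbor, so the
	 * neighbor must be split -- and that split can fail, e.g.
	 * with -ENOMEM, after arch_unmap() has already run */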

But, the scenario would have to be that someone tried to munmap() the
VDSO and something adjacent, the munmap() failed, and they kept on using
the VDSO and expected the special signal and perf behavior to be maintained.

BTW, what keeps the VDSO from merging with an adjacent VMA?  Is it just
the vm_ops->close that comes from special_mapping_vmops?
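
From a quick look at mm/mmap.c, I *think* the merge code refuses any
VMA that has a ->close hook, roughly (paraphrased from memory, may not
match the exact source):

	/* is_mergeable_vma(), abbreviated: */
	if (vma->vm_ops && vma->vm_ops->close)
		return 0;	/* never merge a VMA with a close hook */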

> I don't know what the rationale is for moving arch_unmap() to the
> beginning of __do_munmap(), but the error paths must be handled.

It's in the changelog:

	https://patchwork.kernel.org/patch/10909727/

But, the tl;dr version is: x86 is recursively calling __do_munmap() (via
arch_unmap()) in a spot where the internal rbtree data is inconsistent,
which causes all kinds of fun.  If we move arch_unmap() to before
__do_munmap() does any data structure manipulation, the recursive call
doesn't get confused any more.
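
In code, the fix boils down to a reordering along these lines (a
sketch, not the exact diff; signatures approximate):

	int __do_munmap(struct mm_struct *mm, unsigned long start, size_t len,
			struct list_head *uf, bool downgrade)
	{
		/*
		 * arch_unmap() may recurse into __do_munmap() (x86 MPX
		 * does exactly that), so call it first, while the rbtree
		 * and the VMA list are still consistent.
		 */
		arch_unmap(mm, start, start + len);

		/* only then: find_vma(), __split_vma(), detach, free */
		...
	}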

> There are 2 assumptions here:
>  1. 'start' and 'end' are page aligned (this is guaranteed by __do_munmap()).
>  2. the VDSO is 1 page (this is guaranteed by the union vdso_data_store on powerpc)

Are you sure about #2?  The 'vdso64_pages' variable seems rather
unnecessary if the VDSO is only 1 page. ;)
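
FWIW, that is also why assumption #2 matters: if I remember the powerpc
arch_unmap() correctly, it looks roughly like this (from memory, may
not match the exact source):

	static inline void arch_unmap(struct mm_struct *mm,
				      struct vm_area_struct *vma,
				      unsigned long start, unsigned long end)
	{
		/* forget the VDSO if its base page is being unmapped */
		if (start <= mm->context.vdso_base &&
		    mm->context.vdso_base < end)
			mm->context.vdso_base = 0;
	}

It only fires when the unmapped range covers the *base* page, so
unmapping just the tail of a multi-page VDSO would leave vdso_base
pointing at a half-unmapped VDSO.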
