Date:   Wed, 29 Nov 2017 15:42:17 +0100
From:   Michal Hocko <mhocko@...nel.org>
To:     linux-api@...r.kernel.org
Cc:     Khalid Aziz <khalid.aziz@...cle.com>,
        Michael Ellerman <mpe@...erman.id.au>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Russell King - ARM Linux <linux@...linux.org.uk>,
        Andrea Arcangeli <aarcange@...hat.com>, <linux-mm@...ck.org>,
        LKML <linux-kernel@...r.kernel.org>, linux-arch@...r.kernel.org,
        Florian Weimer <fweimer@...hat.com>,
        John Hubbard <jhubbard@...dia.com>,
        Abdul Haleem <abdhalee@...ux.vnet.ibm.com>,
        Joel Stanley <joel@....id.au>,
        Kees Cook <keescook@...omium.org>,
        Michal Hocko <mhocko@...e.com>
Subject: [PATCH 0/2] mm: introduce MAP_FIXED_SAFE

Hi,
I am resending with the RFC tag dropped and asking for inclusion. There
haven't been any fundamental objections to the RFC [1]. I have also
prepared a man page patch, which is 0/3 of this series.

This started as a follow-up to the discussion [2][3] about a runtime
failure caused by the hardening patch [4] which removes MAP_FIXED from
the ELF loader, because MAP_FIXED is inherently dangerous: it might
silently clobber an existing underlying mapping (e.g. the stack). The
reason for the failure is that some architectures enforce an alignment
on the given address hint when MAP_FIXED is not used (e.g. for shared
or file backed mappings), so the resulting mapping can end up at a
different address than the loader asked for.
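
A tiny userspace illustration of the underlying issue: without
MAP_FIXED the address is only a hint, and the kernel may return a
different (e.g. re-aligned) address, which is what broke the loader
after [4]. The hint value below is arbitrary.

#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
	void *hint = (void *)0x700000001000UL;	/* arbitrary example hint */
	void *p = mmap(hint, 4096, PROT_READ | PROT_WRITE,
		       MAP_SHARED | MAP_ANONYMOUS, -1, 0);

	if (p == MAP_FAILED)
		return 1;

	/* Without MAP_FIXED p may differ from hint (e.g. due to SHMLBA
	 * alignment on some architectures); the loader implicitly
	 * assumed p == hint. */
	printf("hint=%p got=%p\n", hint, p);
	munmap(p, 4096);
	return 0;
}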

One way around this would be to exclude the architectures which do
these alignment tricks from the hardening [5]. The patch is really
trivial, but it was objected, rightfully so, that this screams for a
more generic solution. What we basically want is a non-destructive
MAP_FIXED.

The first patch introduces MAP_FIXED_SAFE, which enforces the given
address but, unlike MAP_FIXED, fails with ENOMEM if the given range
conflicts with an existing one. The flag is introduced as a completely
new flag rather than as an extension of MAP_FIXED because of backward
compatibility: we really want a never-clobber semantic even on older
kernels which do not recognize the flag. Unfortunately mmap sucks wrt.
flags evaluation because it does not fail with EINVAL on unknown flags.
On those kernels the call simply falls back to the traditional hint
based semantic, so the caller can still get a different address (which
sucks) but at least will not silently corrupt an existing mapping. I do
not see a good way around that, short of not exposing the new semantic
to userspace at all.
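
To illustrate the intended usage (a minimal sketch; MAP_FIXED_SAFE
comes from the patched uapi headers, and the fallback define below is
only a placeholder for illustration, not the value chosen by patch 1):

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

#ifndef MAP_FIXED_SAFE
#define MAP_FIXED_SAFE	0x100000	/* placeholder for illustration only */
#endif

int main(void)
{
	void *hint = (void *)0x700000000000UL;	/* arbitrary example address */
	size_t len = 2 * 1024 * 1024;
	void *p;

	p = mmap(hint, len, PROT_READ | PROT_WRITE,
		 MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED_SAFE, -1, 0);
	if (p == MAP_FAILED) {
		/* New kernels: ENOMEM, the range conflicts with an existing
		 * mapping and nothing has been clobbered. */
		fprintf(stderr, "mmap: %s\n", strerror(errno));
		return 1;
	}
	if (p != hint) {
		/* Old kernels ignore the unknown flag and fall back to the
		 * hint based semantic, so the caller still has to check the
		 * returned address and clean up. */
		munmap(p, len);
		return 1;
	}
	/* p == hint: mapped exactly where we asked, nothing overwritten. */
	return 0;
}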

It seems there are users who would like to have something like this.
Jemalloc has been mentioned by Michael Ellerman [6].

Florian Weimer has mentioned the following:
: glibc ld.so currently maps DSOs without hints.  This means that the kernel
: will map them right next to each other, and the offsets between them are completely
: predictable.  We would like to change that and supply a random address in a
: window of the address space.  If there is a conflict, we do not want the
: kernel to pick a non-random address. Instead, we would try again with a
: random address.
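
A sketch of the loader pattern Florian describes, using the flag from
this series (pick_random_addr() and the window bounds are made up for
the example; the fallback define is again just a placeholder):

#include <stdint.h>
#include <stdlib.h>
#include <sys/mman.h>

#ifndef MAP_FIXED_SAFE
#define MAP_FIXED_SAFE	0x100000	/* placeholder for illustration only */
#endif

static void *pick_random_addr(void)
{
	/* Hypothetical: a page aligned address inside a chosen window. */
	uintptr_t base = 0x600000000000UL;
	uintptr_t span = 0x40000000UL;
	return (void *)(base + ((uintptr_t)random() % span & ~0xfffUL));
}

static void *map_dso_randomized(size_t len, int tries)
{
	while (tries-- > 0) {
		void *addr = pick_random_addr();
		void *p = mmap(addr, len, PROT_READ,
			       MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED_SAFE,
			       -1, 0);

		if (p == MAP_FAILED)
			continue;	/* conflict: retry with another random address */
		if (p == addr)
			return p;	/* got exactly the randomized address */

		/* Old kernel fallback: the hint was not honoured, do not
		 * accept a kernel chosen (predictable) placement. */
		munmap(p, len);
	}
	return MAP_FAILED;
}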

John Hubbard has mentioned a CUDA example:
: a) Searches /proc/<pid>/maps for a "suitable" region of available
: VA space.  "Suitable" generally means it has to have a base address
: within a certain limited range (a particular device model might
: have odd limitations, for example), it has to be large enough, and
: alignment has to be large enough (again, various devices may have
: constraints that lead us to do this).
: 
: This is of course subject to races with other threads in the process.
: 
: Let's say it finds a region starting at va.
: 
: b) Next it does: 
:     p = mmap(va, ...) 
: 
: *without* setting MAP_FIXED, of course (so va is just a hint), to
: attempt to safely reserve that region. If p != va, then in most cases,
: this is a failure (almost certainly due to another thread getting a
: mapping from that region before we did), and so this layer now has to
: call munmap(), before returning a "failure: retry" to upper layers.
: 
:     IMPROVEMENT: --> if instead, we could call this:
: 
:             p = mmap(va, ... MAP_FIXED_SAFE ...)
: 
:         , then we could skip the munmap() call upon failure. This
:         is a small thing, but it is useful here. (Thanks to Piotr
:         Jaroszynski and Mark Hairgrove for helping me get that detail
:         exactly right, btw.)
: 
: c) After that, CUDA suballocates from p, via: 
:  
:      q = mmap(sub_region_start, ... MAP_FIXED ...)
: 
: Interestingly enough, "freeing" is also done via MAP_FIXED, by
: setting PROT_NONE on the subregion. Anyway, I just included (c) for
: general interest.

Atomic address range probing in multithreaded programs in general
sounds like an interesting thing to me.
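
A sketch of the probe/reserve step from the CUDA example above
(find_suitable_va() stands in for the /proc/<pid>/maps search and is
hypothetical; the fallback define is a placeholder):

#include <stddef.h>
#include <sys/mman.h>

#ifndef MAP_FIXED_SAFE
#define MAP_FIXED_SAFE	0x100000	/* placeholder for illustration only */
#endif

/* Hypothetical helper implementing step (a), the /proc/<pid>/maps scan. */
void *find_suitable_va(size_t len);

/* Step (b): reserve the region at exactly va, or fail without side effects. */
static void *reserve_region(size_t len)
{
	void *va = find_suitable_va(len);

	if (!va)
		return MAP_FAILED;

	/*
	 * With a plain hinted mmap() a racing thread can take the range and
	 * we get back p != va, which forces a munmap() before retrying.
	 * With MAP_FIXED_SAFE a conflict fails with ENOMEM and there is
	 * nothing to clean up.
	 */
	return mmap(va, len, PROT_NONE,
		    MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED_SAFE, -1, 0);
}

/* Step (c), suballocation inside the reservation, keeps using plain
 * MAP_FIXED because overwriting our own PROT_NONE window is intended. */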

The second patch simply replaces the MAP_FIXED use in the ELF loader
with MAP_FIXED_SAFE. I believe other places which rely on MAP_FIXED
should follow. Actually, genuine MAP_FIXED usages should be documented
properly and should be more of an exception.

Does anybody see any fundamental reasons why this is a wrong approach?

Diffstat says
 arch/alpha/include/uapi/asm/mman.h   |  2 ++
 arch/metag/kernel/process.c          |  6 +++++-
 arch/mips/include/uapi/asm/mman.h    |  2 ++
 arch/parisc/include/uapi/asm/mman.h  |  2 ++
 arch/powerpc/include/uapi/asm/mman.h |  1 +
 arch/sparc/include/uapi/asm/mman.h   |  1 +
 arch/tile/include/uapi/asm/mman.h    |  1 +
 arch/xtensa/include/uapi/asm/mman.h  |  2 ++
 fs/binfmt_elf.c                      | 12 ++++++++----
 include/uapi/asm-generic/mman.h      |  1 +
 mm/mmap.c                            | 11 +++++++++++
 11 files changed, 36 insertions(+), 5 deletions(-)

[1] http://lkml.kernel.org/r/20171116101900.13621-1-mhocko@kernel.org
[2] http://lkml.kernel.org/r/20171107162217.382cd754@canb.auug.org.au
[3] http://lkml.kernel.org/r/1510048229.12079.7.camel@abdul.in.ibm.com
[4] http://lkml.kernel.org/r/20171023082608.6167-1-mhocko@kernel.org
[5] http://lkml.kernel.org/r/20171113094203.aofz2e7kueitk55y@dhcp22.suse.cz
[6] http://lkml.kernel.org/r/87efp1w7vy.fsf@concordia.ellerman.id.au

