[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Date:   Thu, 26 Oct 2017 23:12:09 +0200
From:   Linus Torvalds <>
To:     Craig Bergstrom <>
Cc:     Ingo Molnar <>,
        Sander Eikelenboom <>,
        Boris Ostrovsky <>,
        Fengguang Wu <>,
        Linux Kernel Mailing List <>,
        LKP <>
Subject: Re: ce56a86e2a ("x86/mm: Limit mmap() of /dev/mem to valid physical
 addresses"): kernel BUG at arch/x86/mm/physaddr.c:79!

On Thu, Oct 26, 2017 at 9:50 PM, Craig Bergstrom <> wrote:
> Reverting seems like the right approach at the moment.  My apologies
> for the breakage so late in the cycle.
> Post-revert, there remains a bug here wherein you can make the system
> OOPS if you mmap memory above the 48 bit bus width.  Linus/Ingo, is
> there something in particular that you'd like to see before pulling in
> a check on the bus width (or some other fix)?

I think we might also look at just handling the whole RSVD page
fault case more gracefully.

The fact that we consider it a page table corruption and react to it
in a fairly extreme manner sounds a bit excessive. Yes, that RSVD bit
clearly gets set if we have page table corruption, but if we can
trigger it in other ways, we shouldn't necessarily just assume blindly
that it's due to some horrible corruption.

So perhaps instead of trying to limit the mmap, see this as a testing
opportunity for unusual fault handling, and not consider RSVD a
"corrupted page tables" issue, but just treat it as a normal page
fault (perhaps after validating that it's a /dev/mem mapping).

I'll need to go to bed now (in order to wake up again in a couple of
hours to go to the airport to get back home from the kernel summit),
but this doesn't sound fundamentally nasty. And handling it at fault
time avoids the whole worry about "what if we got the physical memory
size wrong for some CPU or virtualization environment" issue.
