Message-ID: <CAHk-=wjmZzz6b_9iBGp+3Nysb0A6_3VatmUdr_ArgyqHq0KMcA@mail.gmail.com>
Date:   Mon, 15 Jun 2020 12:48:39 -0700
From:   Linus Torvalds <torvalds@...ux-foundation.org>
To:     Shuah Khan <skhan@...uxfoundation.org>,
        Joerg Roedel <jroedel@...e.de>,
        Andy Lutomirski <luto@...nel.org>,
        "Peter Zijlstra (Intel)" <peterz@...radead.org>
Cc:     Takashi Iwai <tiwai@...e.com>,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
        "the arch/x86 maintainers" <x86@...nel.org>
Subject: Re: Linux 5.8-rc1 BUG unable to handle page fault (snd_pcm)

On Mon, Jun 15, 2020 at 11:48 AM Shuah Khan <skhan@...uxfoundation.org> wrote:
>
> I am seeing the following problem on my system. I haven't started
> debugging yet. Is this a known issue?
>
> [    9.791309] BUG: unable to handle page fault for address:
> ffffb1e78165d000
> [    9.791328] #PF: supervisor write access in kernel mode
> [    9.791330] #PF: error_code(0x000b) - reserved bit violation

Hmm. "reserved bit violation" sounds like the page tables themselves
are corrupt.
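
As a sanity check on that reading, here's a minimal standalone sketch
decoding the error_code against the architectural x86 #PF bits (the
macro names here are just for illustration):

    #include <stdio.h>

    /* x86 page-fault error code bits */
    #define PF_PROT  (1UL << 0)  /* 1: protection violation, 0: not-present */
    #define PF_WRITE (1UL << 1)  /* 1: write access */
    #define PF_USER  (1UL << 2)  /* 1: fault taken in user mode */
    #define PF_RSVD  (1UL << 3)  /* 1: reserved bit set in a paging entry */
    #define PF_INSN  (1UL << 4)  /* 1: instruction fetch */

    int main(void)
    {
        unsigned long ec = 0x000b;  /* error_code from the oops above */

        printf("prot=%d write=%d user=%d rsvd=%d insn=%d\n",
               !!(ec & PF_PROT), !!(ec & PF_WRITE), !!(ec & PF_USER),
               !!(ec & PF_RSVD), !!(ec & PF_INSN));
        /* prints: prot=1 write=1 user=0 rsvd=1 insn=0,
           i.e. a supervisor-mode write tripping a reserved-bit check */
        return 0;
    }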

> [    9.791332] PGD 23dd5c067 P4D 23dd5c067 PUD 23dd5d067 PMD 22ba8e067
> PTE 80001a3681509163

The PTE's low 12 bits (0x163) decode to "global", "dirty+accessed" and
"kernel read-write", so that part looks fine. The top bit is NX. I'm
not seeing any reserved bits set.

The page directory bits look sane too (067 is just the normal state
for page tables).
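
The same flag decoding can be checked with a small standalone sketch
(plain C; the bit positions are the architectural x86-64 paging flags,
the macro names are illustrative rather than the kernel's own):

    #include <stdio.h>

    /* Low flag bits of an x86-64 page-table entry */
    #define PTE_PRESENT  (1UL << 0)
    #define PTE_RW       (1UL << 1)
    #define PTE_USER     (1UL << 2)
    #define PTE_ACCESSED (1UL << 5)
    #define PTE_DIRTY    (1UL << 6)
    #define PTE_GLOBAL   (1UL << 8)

    static void decode(const char *what, unsigned long e)
    {
        printf("%s: present=%d rw=%d user=%d accessed=%d dirty=%d "
               "global=%d nx=%d\n", what,
               !!(e & PTE_PRESENT), !!(e & PTE_RW), !!(e & PTE_USER),
               !!(e & PTE_ACCESSED), !!(e & PTE_DIRTY),
               !!(e & PTE_GLOBAL), !!(e >> 63));
    }

    int main(void)
    {
        decode("PTE", 0x80001a3681509163UL);  /* from the oops */
        decode("PMD", 0x22ba8e067UL);         /* table entries all end in 067 */
        /* 0x163: present, kernel read-write, accessed+dirty, global, NX.
           0x067: present, read-write, user, accessed, dirty - the normal
                  state for entries pointing at lower-level tables. */
        return 0;
    }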

The PTE does have bit 44 set. I think that's what triggers the
problem. This is presumably on a machine with 44 physical address
bits?
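
To make that concrete, masking out the flag bits exposes the physical
frame this PTE points at (a sketch assuming 4-level paging, where PTE
bits 12..51 hold the physical address):

    #include <stdio.h>

    int main(void)
    {
        unsigned long pte  = 0x80001a3681509163UL;  /* from the oops */
        /* bits 12..51 of the entry are the physical frame address */
        unsigned long phys = pte & 0x000ffffffffff000UL;

        printf("phys = %#lx, bit 44 set: %d\n",
               phys, !!(phys & (1UL << 44)));
        /* phys = 0x1a3681509000 -> bit 44 is set.  On a CPU with a
           44-bit physical address width, bits 44..51 are reserved,
           which would explain the reserved-bit #PF. */
        return 0;
    }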

The faulting code is all in memset, and it's just doing "rep stosq" to
fill memory with zeroes, and we have

    RAX: 0000000000000000 (the zero pattern)
    RCX: 00000000000008a0 (repeat count)
    RDI: ffffb1e78165d000 (the target address)

and that target address looks odd. If I read it right, it's at the
41TB mark in the direct-mapped area.

But I am probably mis-reading this.
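
For what it's worth, the arithmetic behind that reading is just a
subtraction against the base of the direct map; a quick sketch,
assuming the default 4-level PAGE_OFFSET of 0xffff888000000000 (KASLR
would shift this):

    #include <stdio.h>

    int main(void)
    {
        unsigned long page_offset = 0xffff888000000000UL;  /* no KASLR */
        unsigned long target      = 0xffffb1e78165d000UL;  /* RDI above */
        unsigned long off         = target - page_offset;

        printf("offset = %#lx (~%lu TB into the direct map)\n",
               off, off >> 40);
        /* offset = 0x29678165d000 -> ~41 TB */
        return 0;
    }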

Better bring in a few more x86 people. We did have some page table
work this time around, with both the entry code changes and the
vmalloc faulting removal.

It doesn't _look_ like it's in the vmalloc range, though. But with
that RCX value, it's certainly doing more than a single page.
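
The arithmetic on that count is straightforward (a sketch; "rep stosq"
stores 8 bytes per iteration):

    #include <stdio.h>

    int main(void)
    {
        unsigned long rcx   = 0x8a0;   /* remaining repeat count from the oops */
        unsigned long bytes = rcx * 8; /* rep stosq writes 8 bytes per iteration */

        printf("%lu bytes = %lu pages + %lu bytes\n",
               bytes, bytes / 4096, bytes % 4096);
        /* 17664 bytes = 4 pages + 1280 bytes - well past a single 4K page */
        return 0;
    }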

> [    9.791367] Call Trace:
> [    9.791377]  ? snd_pcm_hw_params+0x3ca/0x440 [snd_pcm]
> [    9.791383]  snd_pcm_common_ioctl+0x173/0xf20 [snd_pcm]
> [    9.791389]  ? snd_ctl_ioctl+0x1c5/0x710 [snd]
> [    9.791394]  snd_pcm_ioctl+0x27/0x40 [snd_pcm]
> [    9.791398]  ksys_ioctl+0x9d/0xd0
> [    9.791400]  __x64_sys_ioctl+0x1a/0x20
> [    9.791404]  do_syscall_64+0x49/0xc0
> [    9.791406]  entry_SYSCALL_64_after_hwframe+0x44/0xa9

Can you re-create it with CONFIG_DEBUG_INFO enabled, and run it
through scripts/decode_stacktrace.sh to give more details on where it
happens?

              Linus
