Message-ID: <CACT4Y+asySd2syQ7itPsC6Uj6xLUfFjCt5OAMadLNZOvbt9ibQ@mail.gmail.com>
Date: Tue, 19 Dec 2017 12:48:41 +0100
From: Dmitry Vyukov <dvyukov@...gle.com>
To: Ingo Molnar <mingo@...nel.org>
Cc: Andy Lutomirski <luto@...nel.org>,
Wanpeng Li <kernellwp@...il.com>,
David Hildenbrand <david@...hat.com>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Thomas Gleixner <tglx@...utronix.de>,
syzbot
<bot+1f445b1009b8eeededa30fe62ccf685f2ec9d155@...kaller.appspotmail.com>,
Borislav Petkov <bp@...e.de>,
Dmitry Safonov <dsafonov@...tuozzo.com>,
Peter Anvin <hpa@...or.com>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Kyle Huey <me@...ehuey.com>, Ingo Molnar <mingo@...hat.com>,
syzkaller-bugs@...glegroups.com,
"the arch/x86 maintainers" <x86@...nel.org>,
Paolo Bonzini <pbonzini@...hat.com>,
Radim Krčmář <rkrcmar@...hat.com>,
KVM list <kvm@...r.kernel.org>,
"Lan, Tianyu" <tianyu.lan@...el.com>,
James Mattson <jmattson@...gle.com>
Subject: Re: BUG: unable to handle kernel paging request in __switch_to

On Fri, Dec 15, 2017 at 5:44 PM, Ingo Molnar <mingo@...nel.org> wrote:
>
> * Andy Lutomirski <luto@...nel.org> wrote:
>
>> On Fri, Dec 15, 2017 at 2:02 AM, Dmitry Vyukov <dvyukov@...gle.com> wrote:
>> > On Fri, Dec 15, 2017 at 10:58 AM, Wanpeng Li <kernellwp@...il.com> wrote:
>> >> 2017-12-15 17:51 GMT+08:00 David Hildenbrand <david@...hat.com>:
>> >>>
>> >>>> int main()
>> >>>> {
>> >>>> int fd = open("/dev/kvm", 0x80102ul);
>> >>>> int vm = ioctl(fd, KVM_CREATE_VM, 0);
>> >>>> int cpu = ioctl(vm, KVM_CREATE_VCPU, 4);
>> >>>
>> >>> Not even a memory region :) So maybe the first memory access directly
>> >>> triggers a fault?
>> >>>
>> >>>> ioctl(cpu, KVM_RUN, 0);
>> >>>> return 0;
>> >>>> }
>> >>>>
>> >>>> And, yes, this in fact triggers an instant reboot of the kernel (running in qemu).
>> >>>> Am I missing something here?
>> >>>>
>> >>>> +kvm maintainers, you can see full thread here:
>> >>>> https://groups.google.com/forum/#!topic/syzkaller-bugs/_oveOKGm3jw
>> >>
>> >> I didn't see any issue after running the test.
>> >
>> > Yes, it's strange. But I can reproduce it. There must be something
>> > different in our setups.
>> > Here is how to build exact same kernel:
>> > https://groups.google.com/d/msg/syzkaller-bugs/_oveOKGm3jw/vc1tXvsbCgAJ
>> >
>> > Here is how I start qemu:
>> >
>> > qemu-system-x86_64 -hda wheezy.img -net
>> > user,host=10.0.2.10,hostfwd=tcp::10022-:22 -net nic -nographic -kernel
>> > arch/x86/boot/bzImage -append "kvm-intel.nested=1
>> > kvm-intel.unrestricted_guest=1 kvm-intel.ept=1
>> > kvm-intel.flexpriority=1 kvm-intel.vpid=1
>> > kvm-intel.emulate_invalid_guest_state=1 kvm-intel.eptad=1
>> > kvm-intel.enable_shadow_vmcs=1 kvm-intel.pml=1
>> > kvm-intel.enable_apicv=1 console=ttyS0 root=/dev/sda
>> > earlyprintk=serial slub_debug=UZ vsyscall=native rodata=n oops=panic
>> > panic_on_warn=1 panic=86400" -enable-kvm -pidfile vm_pid -m 2G -smp 4
>> > -cpu host -usb -usbdevice mouse -usbdevice tablet -soundhw all
>> >
>> > The image is here:
>> > https://github.com/google/syzkaller/blob/master/docs/syzbot.md#crash-does-not-reproduce
>> >
>> > Host cpu is Intel(R) Xeon(R) CPU E5-2690 v3
>>
>> Looking more closely, you seem to be testing this:
>>
>> commit d127129e85a020879f334154300ddd3f7ec21c1e (HEAD, tag: next-20171129)
>> Author: Stephen Rothwell <s...@...b.auug.org.au>
>> Date: Wed Nov 29 14:09:56 2017 +1100
>>
>>     Add linux-next specific files for 20171129
>>
>> which is almost certainly missing this fix:
>>
>> https://lkml.kernel.org/r/bc7296f4c8d86af71c31a17588c79d89c0890edc.1512109321.git.luto@kernel.org
>>
>> on account of the fix being sent the day after the tag.
>>
>> The symptoms you're seeing are definitely consistent with a screwed up
>> TSS after VM exit.
>
> Note that this should all be fixed in WIP.x86/pti.
>
> If you have:
>
> 5ed1fcd523b9: x86/entry: Fix assumptions that the HW TSS is at the beginning of cpu_tss
>
> then you should be fine.

Let's tell syzbot about the fix:

#syz fix: x86/entry: Fix assumptions that the HW TSS is at the beginning of cpu_tss
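
For anyone who wants to try the quoted reproducer locally, here is a
self-contained sketch of it. The standard headers, O_RDWR in place of the
raw 0x80102ul open flags, and the error checking are additions here, not
part of the original snippet; it assumes <linux/kvm.h> is installed and
/dev/kvm is accessible:

#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

int main(void)
{
	/* Open the KVM control device. */
	int fd = open("/dev/kvm", O_RDWR);
	if (fd < 0) {
		perror("open /dev/kvm");
		return 1;
	}

	/* Create a bare VM: no memory regions at all, as in the repro. */
	int vm = ioctl(fd, KVM_CREATE_VM, 0);
	if (vm < 0) {
		perror("KVM_CREATE_VM");
		return 1;
	}

	/* vcpu id 4 matches the original snippet; any valid id should do. */
	int cpu = ioctl(vm, KVM_CREATE_VCPU, 4);
	if (cpu < 0) {
		perror("KVM_CREATE_VCPU");
		return 1;
	}

	/* Run the vcpu without setting up registers or guest memory;
	   on the affected kernel this alone was enough to trigger the crash. */
	if (ioctl(cpu, KVM_RUN, 0) < 0)
		perror("KVM_RUN");

	return 0;
}

Compile it with "gcc repro.c -o repro" and run it as a user with access to
/dev/kvm; on the linux-next kernel discussed above it rebooted the machine
immediately.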