Message-ID: <CACT4Y+Z+FR_Y4CWq7j7MRd9afs4zx0v6NpO_aguMOD2EWxU8Bw@mail.gmail.com>
Date: Sun, 5 Mar 2017 13:36:46 +0100
From: Dmitry Vyukov <dvyukov@...gle.com>
To: James Mattson <jmattson@...gle.com>,
Steve Rutherford <srutherford@...gle.com>,
Paolo Bonzini <pbonzini@...hat.com>,
Radim Krčmář <rkrcmar@...hat.com>,
P J P <ppandit@...hat.com>,
Xiao Guangrong <guangrong.xiao@...ux.intel.com>,
Haozhong Zhang <haozhong.zhang@...el.com>,
Wanpeng Li <kernellwp@...il.com>,
KVM list <kvm@...r.kernel.org>,
LKML <linux-kernel@...r.kernel.org>
Cc: syzkaller <syzkaller@...glegroups.com>
Subject: kvm: use-after-free in vmx_check_nested_events/vmcs12_guest_cr0
Hello,
The following program triggers a use-after-free in vmx_check_nested_events:
https://gist.githubusercontent.com/dvyukov/30d798b75411474f29bc7dc203a7e5f0/raw/e1613e010ea88f20ee7a28fc44e8dd5861b0c048/gistfile1.txt
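The full reproducer is in the gist above; judging from the allocation
and free stacks below, its essential shape is two threads racing vcpu
ioctls. A hypothetical sketch of that shape (not the actual reproducer;
vcpu_fd setup and the nested guest code that executes VMXON/VMLAUNCH
are assumed to happen elsewhere):

#include <linux/kvm.h>
#include <pthread.h>
#include <string.h>
#include <sys/ioctl.h>

#define MSR_IA32_FEATURE_CONTROL 0x3a

static int vcpu_fd;  /* assumption: vcpu whose guest does VMXON/VMLAUNCH */

/* Thread 1: the KVM_RUN path that reaches vmx_check_nested_events(). */
static void *runner(void *arg)
{
        (void)arg;
        for (;;)
                ioctl(vcpu_fd, KVM_RUN, 0);
        return NULL;
}

/*
 * Thread 2: a host-initiated write of 0 to IA32_FEATURE_CONTROL
 * presumably takes the vmx_set_msr() -> vmx_leave_nested() ->
 * free_nested() path seen in the free stack, releasing cached_vmcs12.
 */
static void *msr_writer(void *arg)
{
        struct { struct kvm_msrs hdr; struct kvm_msr_entry e[1]; } m;

        (void)arg;
        memset(&m, 0, sizeof(m));
        m.hdr.nmsrs = 1;
        m.e[0].index = MSR_IA32_FEATURE_CONTROL;
        for (;;)
                ioctl(vcpu_fd, KVM_SET_MSRS, &m.hdr);
        return NULL;
}

int main(void)
{
        pthread_t t1, t2;

        /* KVM_CREATE_VM/KVM_CREATE_VCPU/guest setup omitted on purpose. */
        pthread_create(&t1, NULL, runner, NULL);
        pthread_create(&t2, NULL, msr_writer, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        return 0;
}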
BUG: KASAN: use-after-free in nested_cpu_has_preemption_timer
arch/x86/kvm/vmx.c:1347 [inline] at addr ffff880063b62f68
BUG: KASAN: use-after-free in vmx_check_nested_events+0x6ab/0x720
arch/x86/kvm/vmx.c:10661 at addr ffff880063b62f68
Read of size 4 by task a.out/2998
CPU: 0 PID: 2998 Comm: a.out Not tainted 4.10.0+ #297
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Bochs 01/01/2011
Call Trace:
__dump_stack lib/dump_stack.c:16 [inline]
dump_stack+0x2fb/0x3fd lib/dump_stack.c:52
kasan_object_err+0x1c/0x90 mm/kasan/report.c:166
print_address_description mm/kasan/report.c:208 [inline]
kasan_report_error mm/kasan/report.c:292 [inline]
kasan_report.part.2+0x1b0/0x460 mm/kasan/report.c:314
kasan_report mm/kasan/report.c:346 [inline]
__asan_report_load_n_noabort+0x24/0x30 mm/kasan/report.c:345
nested_cpu_has_preemption_timer arch/x86/kvm/vmx.c:1347 [inline]
vmx_check_nested_events+0x6ab/0x720 arch/x86/kvm/vmx.c:10661
kvm_vcpu_running arch/x86/kvm/x86.c:7031 [inline]
vcpu_run arch/x86/kvm/x86.c:7045 [inline]
kvm_arch_vcpu_ioctl_run+0x33e/0x4840 arch/x86/kvm/x86.c:7207
kvm_vcpu_ioctl+0x673/0x1120 arch/x86/kvm/../../../virt/kvm/kvm_main.c:2572
vfs_ioctl fs/ioctl.c:45 [inline]
do_vfs_ioctl+0x1bf/0x1790 fs/ioctl.c:685
SYSC_ioctl fs/ioctl.c:700 [inline]
SyS_ioctl+0x8f/0xc0 fs/ioctl.c:691
entry_SYSCALL_64_fastpath+0x1f/0xc2
RIP: 0033:0x450199
RSP: 002b:00007efc5fbcfcd8 EFLAGS: 00000297 ORIG_RAX: 0000000000000010
RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 0000000000450199
RDX: 0000000000000000 RSI: 000000000000ae80 RDI: 0000000000000005
RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000297 R12: 0000000000000000
R13: 0000000000000000 R14: 00007efc5fbd09c0 R15: 00007efc5fbd0700
Object at ffff880063b62c80, in cache kmalloc-4096 size: 4096
Allocated:
PID = 2990
save_stack_trace+0x16/0x20 arch/x86/kernel/stacktrace.c:59
save_stack+0x43/0xd0 mm/kasan/kasan.c:513
set_track mm/kasan/kasan.c:525 [inline]
kasan_kmalloc+0xaa/0xd0 mm/kasan/kasan.c:616
kmem_cache_alloc_trace+0x10b/0x6e0 mm/slab.c:3638
kmalloc include/linux/slab.h:490 [inline]
enter_vmx_operation arch/x86/kvm/vmx.c:7062 [inline]
handle_vmon+0x3a4/0x6f0 arch/x86/kvm/vmx.c:7150
vmx_handle_exit+0xfc0/0x3f00 arch/x86/kvm/vmx.c:8528
vcpu_enter_guest arch/x86/kvm/x86.c:6984 [inline]
vcpu_run arch/x86/kvm/x86.c:7046 [inline]
kvm_arch_vcpu_ioctl_run+0x1418/0x4840 arch/x86/kvm/x86.c:7207
kvm_vcpu_ioctl+0x673/0x1120 arch/x86/kvm/../../../virt/kvm/kvm_main.c:2572
vfs_ioctl fs/ioctl.c:45 [inline]
do_vfs_ioctl+0x1bf/0x1790 fs/ioctl.c:685
SYSC_ioctl fs/ioctl.c:700 [inline]
SyS_ioctl+0x8f/0xc0 fs/ioctl.c:691
entry_SYSCALL_64_fastpath+0x1f/0xc2
Freed:
PID = 2991
save_stack_trace+0x16/0x20 arch/x86/kernel/stacktrace.c:59
save_stack+0x43/0xd0 mm/kasan/kasan.c:513
set_track mm/kasan/kasan.c:525 [inline]
kasan_slab_free+0x6f/0xb0 mm/kasan/kasan.c:589
__cache_free mm/slab.c:3514 [inline]
kfree+0xd3/0x250 mm/slab.c:3831
free_nested.part.79+0x2f6/0xc50 arch/x86/kvm/vmx.c:7239
vmx_leave_nested arch/x86/kvm/vmx.c:3257 [inline]
vmx_set_msr+0x69d/0x1950 arch/x86/kvm/vmx.c:3325
kvm_set_msr+0xd4/0x170 arch/x86/kvm/x86.c:1101
do_set_msr+0x11e/0x190 arch/x86/kvm/x86.c:1130
__msr_io arch/x86/kvm/x86.c:2579 [inline]
msr_io+0x24b/0x450 arch/x86/kvm/x86.c:2616
kvm_arch_vcpu_ioctl+0x35b/0x46a0 arch/x86/kvm/x86.c:3499
kvm_vcpu_ioctl+0x232/0x1120 arch/x86/kvm/../../../virt/kvm/kvm_main.c:2723
vfs_ioctl fs/ioctl.c:45 [inline]
do_vfs_ioctl+0x1bf/0x1790 fs/ioctl.c:685
SYSC_ioctl fs/ioctl.c:700 [inline]
SyS_ioctl+0x8f/0xc0 fs/ioctl.c:691
entry_SYSCALL_64_fastpath+0x1f/0xc2
Memory state around the buggy address:
ffff880063b62e00: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
ffff880063b62e80: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
>ffff880063b62f00: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
^
ffff880063b62f80: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
ffff880063b63000: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
==================================================================
On commit 4bdb75599690f0759b06adfc80d1bcf42e056473 with the following
local diff:
https://gist.githubusercontent.com/dvyukov/44429ac8fe26c43cd324bcc1212245d3/raw/206599fa169ea6b64a8def98fa3bb2fb1e4bc874/gistfile1.txt
Sometimes it also causes the following (this report is from commit
44b4b461a0fb407507b46ea76a71376d74de7058):
BUG: KASAN: use-after-free in vmcs12_guest_cr0
arch/x86/kvm/vmx.c:10649 [inline] at addr ffff8800658d6d68
BUG: KASAN: use-after-free in prepare_vmcs12 arch/x86/kvm/vmx.c:10775
[inline] at addr ffff8800658d6d68
BUG: KASAN: use-after-free in nested_vmx_vmexit+0x6c24/0x74d0
arch/x86/kvm/vmx.c:11080 at addr ffff8800658d6d68
Read of size 8 by task a.out/2926
CPU: 2 PID: 2926 Comm: a.out Not tainted 4.10.0-rc4+ #181
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Bochs 01/01/2011
Call Trace:
__dump_stack lib/dump_stack.c:15 [inline]
dump_stack+0x2ee/0x3ef lib/dump_stack.c:51
kasan_object_err+0x1c/0x70 mm/kasan/report.c:165
print_address_description mm/kasan/report.c:203 [inline]
kasan_report_error mm/kasan/report.c:287 [inline]
kasan_report+0x1b6/0x460 mm/kasan/report.c:307
__asan_report_load_n_noabort+0xf/0x20 mm/kasan/report.c:343
vmcs12_guest_cr0 arch/x86/kvm/vmx.c:10649 [inline]
prepare_vmcs12 arch/x86/kvm/vmx.c:10775 [inline]
nested_vmx_vmexit+0x6c24/0x74d0 arch/x86/kvm/vmx.c:11080
vmx_handle_exit+0xf82/0x3fc0 arch/x86/kvm/vmx.c:8571
vcpu_enter_guest arch/x86/kvm/x86.c:6905 [inline]
vcpu_run arch/x86/kvm/x86.c:6964 [inline]
kvm_arch_vcpu_ioctl_run+0xf7e/0x4890 arch/x86/kvm/x86.c:7122
kvm_vcpu_ioctl+0x673/0x1120 arch/x86/kvm/../../../virt/kvm/kvm_main.c:2570
vfs_ioctl fs/ioctl.c:43 [inline]
do_vfs_ioctl+0x1bf/0x1790 fs/ioctl.c:683
SYSC_ioctl fs/ioctl.c:698 [inline]
SyS_ioctl+0x8f/0xc0 fs/ioctl.c:689
entry_SYSCALL_64_fastpath+0x1f/0xc2
RIP: 0033:0x450199
RSP: 002b:00007f8307392cd8 EFLAGS: 00000297 ORIG_RAX: 0000000000000010
RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 0000000000450199
RDX: 0000000000000000 RSI: 000000000000ae80 RDI: 0000000000000005
RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000297 R12: 0000000000000000
R13: 0000000000000000 R14: 00007f83073939c0 R15: 00007f8307393700
Object at ffff8800658d6bc0, in cache kmalloc-4096 size: 4096
Allocated:
PID = 2918
[<ffffffff812b2686>] save_stack_trace+0x16/0x20 arch/x86/kernel/stacktrace.c:57
[<ffffffff81a0e8c3>] save_stack+0x43/0xd0 mm/kasan/kasan.c:502
[<ffffffff81a0eb8a>] set_track mm/kasan/kasan.c:514 [inline]
[<ffffffff81a0eb8a>] kasan_kmalloc+0xaa/0xd0 mm/kasan/kasan.c:605
[<ffffffff81a0b3db>] kmem_cache_alloc_trace+0x10b/0x670 mm/slab.c:3629
[<ffffffff811bc9cd>] kmalloc include/linux/slab.h:490 [inline]
[<ffffffff811bc9cd>] handle_vmon+0x35d/0x790 arch/x86/kvm/vmx.c:7230
[<ffffffff811d7d66>] vmx_handle_exit+0xf96/0x3fc0 arch/x86/kvm/vmx.c:8634
[<ffffffff810f045e>] vcpu_enter_guest arch/x86/kvm/x86.c:6905 [inline]
[<ffffffff810f045e>] vcpu_run arch/x86/kvm/x86.c:6964 [inline]
[<ffffffff810f045e>] kvm_arch_vcpu_ioctl_run+0xf7e/0x4890
arch/x86/kvm/x86.c:7122
[<ffffffff8107a8a3>] kvm_vcpu_ioctl+0x673/0x1120
arch/x86/kvm/../../../virt/kvm/kvm_main.c:2570
[<ffffffff81aa5aaf>] vfs_ioctl fs/ioctl.c:43 [inline]
[<ffffffff81aa5aaf>] do_vfs_ioctl+0x1bf/0x1790 fs/ioctl.c:683
[<ffffffff81aa710f>] SYSC_ioctl fs/ioctl.c:698 [inline]
[<ffffffff81aa710f>] SyS_ioctl+0x8f/0xc0 fs/ioctl.c:689
[<ffffffff841cacc1>] entry_SYSCALL_64_fastpath+0x1f/0xc2
Freed:
PID = 2919
[<ffffffff812b2686>] save_stack_trace+0x16/0x20 arch/x86/kernel/stacktrace.c:57
[<ffffffff81a0e8c3>] save_stack+0x43/0xd0 mm/kasan/kasan.c:502
[<ffffffff81a0f1ff>] set_track mm/kasan/kasan.c:514 [inline]
[<ffffffff81a0f1ff>] kasan_slab_free+0x6f/0xb0 mm/kasan/kasan.c:578
[<ffffffff81a0d0b3>] __cache_free mm/slab.c:3505 [inline]
[<ffffffff81a0d0b3>] kfree+0xd3/0x250 mm/slab.c:3822
[<ffffffff811b6c26>] free_nested.part.83+0x2f6/0xc60 arch/x86/kvm/vmx.c:7348
[<ffffffff811eae95>] vmx_leave_nested arch/x86/kvm/vmx.c:3314 [inline]
[<ffffffff811eae95>] vmx_set_msr+0x665/0x1910 arch/x86/kvm/vmx.c:3381
[<ffffffff810960b4>] kvm_set_msr+0xd4/0x170 arch/x86/kvm/x86.c:1097
[<ffffffff8109644e>] do_set_msr+0x11e/0x190 arch/x86/kvm/x86.c:1126
[<ffffffff810c755b>] __msr_io arch/x86/kvm/x86.c:2544 [inline]
[<ffffffff810c755b>] msr_io+0x24b/0x450 arch/x86/kvm/x86.c:2581
[<ffffffff810db4bb>] kvm_arch_vcpu_ioctl+0x35b/0x46e0 arch/x86/kvm/x86.c:3462
[<ffffffff8107a462>] kvm_vcpu_ioctl+0x232/0x1120
arch/x86/kvm/../../../virt/kvm/kvm_main.c:2721
[<ffffffff81aa5aaf>] vfs_ioctl fs/ioctl.c:43 [inline]
[<ffffffff81aa5aaf>] do_vfs_ioctl+0x1bf/0x1790 fs/ioctl.c:683
[<ffffffff81aa710f>] SYSC_ioctl fs/ioctl.c:698 [inline]
[<ffffffff81aa710f>] SyS_ioctl+0x8f/0xc0 fs/ioctl.c:689
[<ffffffff841cacc1>] entry_SYSCALL_64_fastpath+0x1f/0xc2
Jim noted that both paths run under the vcpu mutex, so this is
probably not a low-level race but rather a leftover dangling
reference.
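To illustrate the distinction: mutual exclusion only prevents
overlapping access; it does nothing for an object freed in one
critical section and dereferenced in the next. A standalone model of
that pattern, nothing KVM-specific about it (build with
-fsanitize=address to get a heap-use-after-free report much like the
ones above):

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER; /* "vcpu mutex" */
static int *obj;                                         /* "cached_vmcs12" */

static void *freer(void *arg)
{
        (void)arg;
        pthread_mutex_lock(&lock);
        free(obj);              /* freed, but the pointer stays reachable */
        pthread_mutex_unlock(&lock);
        return NULL;
}

int main(void)
{
        pthread_t t;

        obj = calloc(1, sizeof(*obj));
        pthread_create(&t, NULL, freer, NULL);
        pthread_join(t, NULL);

        pthread_mutex_lock(&lock);
        printf("%d\n", *obj);   /* fully serialized, still a use-after-free */
        pthread_mutex_unlock(&lock);
        return 0;
}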
nested_vmx_run is called only from handle_vmlaunch/handle_vmresume.
Could we exit from L2, release the vcpu mutex and return to userspace;
have cached_vmcs12 freed at that point; and then reacquire the vcpu
mutex and re-enter directly into L2? Looking at the report, that seems
to be what happened: VMXON and the UAF happened in different threads,
so we clearly returned to userspace and dropped the mutex in between.
And then we somehow got into nested_vmx_vmexit, which means
leave_guest_mode wasn't called after VMXON.
Is it possible to return to userspace from vmx_handle_exit without
leaving guest mode?
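For reference, a minimal standalone model of the hypothesized sequence
(illustrative only; the identifiers merely echo the kernel's, and
guest_mode stands in for is_guest_mode(vcpu)):

#include <stdio.h>
#include <stdlib.h>

struct vmcs12 { unsigned int pin_based_vm_exec_control; };

struct vcpu {
        int guest_mode;               /* stand-in for is_guest_mode(vcpu) */
        struct vmcs12 *cached_vmcs12; /* stand-in for nested.cached_vmcs12 */
};

/*
 * SET_MSR path: frees the cache but, per the hypothesis above, never
 * takes the vcpu out of guest mode.
 */
static void model_free_nested(struct vcpu *v)
{
        free(v->cached_vmcs12);       /* pointer left dangling on purpose */
}

/*
 * KVM_RUN path: still sees guest_mode set and dereferences the stale
 * vmcs12, like nested_cpu_has_preemption_timer() in the first trace.
 */
static void model_check_nested_events(struct vcpu *v)
{
        if (v->guest_mode)
                printf("%u\n", v->cached_vmcs12->pin_based_vm_exec_control);
}

int main(void)
{
        struct vcpu v = { 1, calloc(1, sizeof(struct vmcs12)) };

        model_free_nested(&v);         /* the SET_MSR ioctl, serialized */
        model_check_nested_events(&v); /* the next KVM_RUN, serialized: UAF */
        return 0;
}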