Message-ID: <CACT4Y+ZyPq3kQBMFs-=AX34=+-ze+2UrSAapCKrgBUYw4gJD+w@mail.gmail.com>
Date: Tue, 17 Jan 2017 17:00:23 +0100
From: Dmitry Vyukov <dvyukov@...gle.com>
To: Paolo Bonzini <pbonzini@...hat.com>
Cc: Radim Krčmář <rkrcmar@...hat.com>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>,
"H. Peter Anvin" <hpa@...or.com>,
"x86@...nel.org" <x86@...nel.org>, KVM list <kvm@...r.kernel.org>,
LKML <linux-kernel@...r.kernel.org>,
Alan Stern <stern@...land.harvard.edu>,
Steve Rutherford <srutherford@...gle.com>,
Xiao Guangrong <guangrong.xiao@...ux.intel.com>,
haozhong.zhang@...el.com, syzkaller <syzkaller@...glegroups.com>
Subject: Re: kvm: WARNING in mmu_spte_clear_track_bits
On Tue, Jan 17, 2017 at 4:20 PM, Paolo Bonzini <pbonzini@...hat.com> wrote:
>
>
> On 13/01/2017 12:15, Dmitry Vyukov wrote:
>>
>> I've commented out the WARNING for now, but I am seeing lots of
>> use-after-free's and rcu stalls involving mmu_spte_clear_track_bits:
>>
>>
>> BUG: KASAN: use-after-free in mmu_spte_clear_track_bits+0x186/0x190
>> arch/x86/kvm/mmu.c:597 at addr ffff880068ae2008
>> Read of size 8 by task syz-executor2/16715
>> page:ffffea00016e6170 count:0 mapcount:0 mapping: (null) index:0x0
>> flags: 0x500000000000000()
>> raw: 0500000000000000 0000000000000000 0000000000000000 00000000ffffffff
>> raw: ffffea00017ec5a0 ffffea0001783d48 ffff88006aec5d98
>> page dumped because: kasan: bad access detected
>> CPU: 2 PID: 16715 Comm: syz-executor2 Not tainted 4.10.0-rc3+ #163
>> Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Bochs 01/01/2011
>> Call Trace:
>> __dump_stack lib/dump_stack.c:15 [inline]
>> dump_stack+0x292/0x3a2 lib/dump_stack.c:51
>> kasan_report_error mm/kasan/report.c:213 [inline]
>> kasan_report+0x42d/0x460 mm/kasan/report.c:307
>> __asan_report_load8_noabort+0x14/0x20 mm/kasan/report.c:333
>> mmu_spte_clear_track_bits+0x186/0x190 arch/x86/kvm/mmu.c:597
>> drop_spte+0x24/0x280 arch/x86/kvm/mmu.c:1182
>> kvm_zap_rmapp+0x119/0x260 arch/x86/kvm/mmu.c:1401
>> kvm_unmap_rmapp+0x1d/0x30 arch/x86/kvm/mmu.c:1412
>> kvm_handle_hva_range+0x54a/0x7d0 arch/x86/kvm/mmu.c:1565
>> kvm_unmap_hva_range+0x2e/0x40 arch/x86/kvm/mmu.c:1591
>> kvm_mmu_notifier_invalidate_range_start+0xae/0x140
>> arch/x86/kvm/../../../virt/kvm/kvm_main.c:360
>> __mmu_notifier_invalidate_range_start+0x1f8/0x300 mm/mmu_notifier.c:199
>> mmu_notifier_invalidate_range_start include/linux/mmu_notifier.h:282 [inline]
>> unmap_vmas+0x14b/0x1b0 mm/memory.c:1368
>> unmap_region+0x2f8/0x560 mm/mmap.c:2460
>> do_munmap+0x7b8/0xfa0 mm/mmap.c:2657
>> mmap_region+0x68f/0x18e0 mm/mmap.c:1612
>> do_mmap+0x6a2/0xd40 mm/mmap.c:1450
>> do_mmap_pgoff include/linux/mm.h:2031 [inline]
>> vm_mmap_pgoff+0x1a9/0x200 mm/util.c:305
>> SYSC_mmap_pgoff mm/mmap.c:1500 [inline]
>> SyS_mmap_pgoff+0x22c/0x5d0 mm/mmap.c:1458
>> SYSC_mmap arch/x86/kernel/sys_x86_64.c:95 [inline]
>> SyS_mmap+0x16/0x20 arch/x86/kernel/sys_x86_64.c:86
>> entry_SYSCALL_64_fastpath+0x1f/0xc2
>> RIP: 0033:0x445329
>> RSP: 002b:00007fb33933cb58 EFLAGS: 00000282 ORIG_RAX: 0000000000000009
>> RAX: ffffffffffffffda RBX: 0000000020000000 RCX: 0000000000445329
>> RDX: 0000000000000003 RSI: 0000000000af1000 RDI: 0000000020000000
>> RBP: 00000000006dfe90 R08: ffffffffffffffff R09: 0000000000000000
>> R10: 0000000000000032 R11: 0000000000000282 R12: 0000000000700000
>> R13: 0000000000000006 R14: ffffffffffffffff R15: 0000000020001000
>> Memory state around the buggy address:
>> ffff880068ae1f00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
>> ffff880068ae1f80: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
>>> ffff880068ae2000: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
>> ^
>> ffff880068ae2080: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
>> ffff880068ae2100: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
>> ==================================================================
>
> This could be related to the gfn_to_rmap issues.
Hmm... That's possible. It does look that way: I am no longer seeing any
spte-related crashes after I applied the following patch:
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -968,8 +968,7 @@ int __kvm_set_memory_region(struct kvm *kvm,
/* Check for overlaps */
r = -EEXIST;
kvm_for_each_memslot(slot, __kvm_memslots(kvm, as_id)) {
- if ((slot->id >= KVM_USER_MEM_SLOTS) ||
- (slot->id == id))
+ if (slot->id == id)
continue;
if (!((base_gfn + npages <= slot->base_gfn) ||
(base_gfn >= slot->base_gfn + slot->npages)))
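
For reference, below is a minimal standalone sketch (ordinary userspace C,
not kernel code) of the range-overlap test used in that loop; the slot id
and GFN numbers are made up purely for illustration. If I'm reading the
loop right, the original "slot->id >= KVM_USER_MEM_SLOTS" continue skips
internal slots entirely, so a new user slot overlapping one of them is not
rejected with -EEXIST; with the change above the overlap test runs against
those slots too.

/* Standalone illustration only; values below are hypothetical. */
#include <stdbool.h>
#include <stdio.h>

struct slot {
	int id;
	unsigned long base_gfn;
	unsigned long npages;
};

/* Two GFN ranges overlap unless one ends at or before the other begins. */
static bool ranges_overlap(unsigned long base_gfn, unsigned long npages,
			   const struct slot *slot)
{
	return !((base_gfn + npages <= slot->base_gfn) ||
		 (base_gfn >= slot->base_gfn + slot->npages));
}

int main(void)
{
	/* Hypothetical internal slot (id >= KVM_USER_MEM_SLOTS). */
	struct slot internal = { .id = 512, .base_gfn = 0x100, .npages = 0x10 };
	/* Hypothetical new user slot covering some of the same GFNs. */
	unsigned long new_base_gfn = 0x108, new_npages = 0x10;

	/*
	 * With the "slot->id >= KVM_USER_MEM_SLOTS" continue in place, the
	 * internal slot is never compared against, so this overlap is not
	 * caught; with the patch above it would trigger the -EEXIST path.
	 */
	printf("overlaps: %d\n",
	       ranges_overlap(new_base_gfn, new_npages, &internal));
	return 0;
}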