Message-ID: <CALzav=euH_n9WXG29CFd10urh85O4Mw2L=StEizVmh27CYzrtQ@mail.gmail.com>
Date: Fri, 15 Mar 2024 11:07:24 -0700
From: David Matlack <dmatlack@...gle.com>
To: syzbot <syzbot+900d58a45dcaab9e4821@...kaller.appspotmail.com>
Cc: bp@...en8.de, dave.hansen@...ux.intel.com, hpa@...or.com,
kvm@...r.kernel.org, linux-kernel@...r.kernel.org, mingo@...hat.com,
pbonzini@...hat.com, seanjc@...gle.com, syzkaller-bugs@...glegroups.com,
tglx@...utronix.de, x86@...nel.org
Subject: Re: [syzbot] [kvm?] WARNING in clear_dirty_gfn_range
On Tue, Mar 12, 2024 at 4:34 PM syzbot
<syzbot+900d58a45dcaab9e4821@...kaller.appspotmail.com> wrote:
>
> ------------[ cut here ]------------
> WARNING: CPU: 1 PID: 5165 at arch/x86/kvm/mmu/tdp_mmu.c:1526 clear_dirty_gfn_range+0x3d6/0x540 arch/x86/kvm/mmu/tdp_mmu.c:1526
> Modules linked in:
> CPU: 1 PID: 5165 Comm: syz-executor417 Not tainted 6.8.0-syzkaller-01185-g855684c7d938 #0
> Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
> RIP: 0010:clear_dirty_gfn_range+0x3d6/0x540 arch/x86/kvm/mmu/tdp_mmu.c:1526
> Call Trace:
> <TASK>
> kvm_tdp_mmu_clear_dirty_slot+0x24f/0x2e0 arch/x86/kvm/mmu/tdp_mmu.c:1557
> kvm_mmu_slot_leaf_clear_dirty+0x38b/0x490 arch/x86/kvm/mmu/mmu.c:6783
> kvm_mmu_slot_apply_flags arch/x86/kvm/x86.c:12962 [inline]
> kvm_arch_commit_memory_region+0x299/0x490 arch/x86/kvm/x86.c:13031
> kvm_commit_memory_region arch/x86/kvm/../../../virt/kvm/kvm_main.c:1751 [inline]
> kvm_set_memslot+0x4d3/0x13e0 arch/x86/kvm/../../../virt/kvm/kvm_main.c:1994
> __kvm_set_memory_region arch/x86/kvm/../../../virt/kvm/kvm_main.c:2129 [inline]
> __kvm_set_memory_region+0xdbc/0x1520 arch/x86/kvm/../../../virt/kvm/kvm_main.c:2020
> kvm_set_memory_region arch/x86/kvm/../../../virt/kvm/kvm_main.c:2150 [inline]
> kvm_vm_ioctl_set_memory_region arch/x86/kvm/../../../virt/kvm/kvm_main.c:2162 [inline]
> kvm_vm_ioctl+0x151c/0x3e20 arch/x86/kvm/../../../virt/kvm/kvm_main.c:5152
The reproducer uses nested virtualization to launch an L2 with EPT
disabled. This creates a TDP MMU root with role.guest_mode=1, which
triggers the WARN_ON() in clear_dirty_gfn_range() because
kvm_mmu_page_ad_need_write_protect() returns true whenever PML is
enabled and the shadow page has role.guest_mode=1.
If I'm reading prepare_vmcs02_constant_state() correctly, we always
disable PML when running in L2. So when enable_pml=1 and L2 runs with
EPT disabled, dirty tracking is blind to L2 accesses.