Message-ID: <1b46d531-6423-3ccc-fc5f-df6fbaa02557@redhat.com>
Date: Thu, 14 Nov 2019 13:16:21 +0100
From: Paolo Bonzini <pbonzini@...hat.com>
To: Sean Christopherson <sean.j.christopherson@...el.com>,
Radim Krčmář <rkrcmar@...hat.com>
Cc: Vitaly Kuznetsov <vkuznets@...hat.com>,
Wanpeng Li <wanpengli@...cent.com>,
Jim Mattson <jmattson@...gle.com>,
Joerg Roedel <joro@...tes.org>, kvm@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH] KVM: x86/mmu: Take slots_lock when using
kvm_mmu_zap_all_fast()

On 13/11/19 20:30, Sean Christopherson wrote:
> Failing to take slots_lock when toggling nx_huge_pages allows multiple
> instances of kvm_mmu_zap_all_fast() to run concurrently, as the other
> user, KVM_SET_USER_MEMORY_REGION, does not take the global kvm_lock.
> Concurrent fast zap instances cause obsolete shadow pages to be
> incorrectly identified as valid due to the single bit generation number
> wrapping, which results in stale shadow pages being left in KVM's MMU
> and leads to all sorts of undesirable behavior.
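
(A minimal sketch of the kind of change the patch presumably makes: taking
kvm->slots_lock around the zap in the nx_huge_pages module-parameter path.
The function name set_nx_huge_pages() and the surrounding loop shown here
are written from memory as an illustration, not quoted from the patch.)

    /*
     * Toggling the nx_huge_pages module parameter walks every VM under
     * kvm_lock; taking slots_lock per VM serializes this zap against the
     * one triggered by KVM_SET_USER_MEMORY_REGION, which already runs
     * with slots_lock held.
     */
    static int set_nx_huge_pages(const char *val, const struct kernel_param *kp)
    {
            ...
            if (new_val != old_val) {
                    struct kvm *kvm;

                    mutex_lock(&kvm_lock);
                    list_for_each_entry(kvm, &vm_list, vm_list) {
                            mutex_lock(&kvm->slots_lock);
                            kvm_mmu_zap_all_fast(kvm);
                            mutex_unlock(&kvm->slots_lock);
                    }
                    mutex_unlock(&kvm_lock);
            }
            ...
    }
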
Indeed the current code fails lockdep miserably, but isn't the whole
body of kvm_mmu_zap_all_fast() covered by kvm->mmu_lock? What kind of
badness can happen if kvm->slots_lock isn't taken?
Paolo
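
(For context on the question above, a rough sketch of what the fast zap does
in this era of the code and why mmu_lock alone may not close the race. This
is paraphrased rather than quoted; the helpers is_obsolete_sp() and
kvm_zap_obsolete_pages() and the exact details may not match the tree under
discussion.)

    /* The fast zap flips a single-bit generation under mmu_lock ... */
    spin_lock(&kvm->mmu_lock);
    kvm->arch.mmu_valid_gen = kvm->arch.mmu_valid_gen ? 0 : 1;
    kvm_zap_obsolete_pages(kvm);    /* drops/re-takes mmu_lock between batches */
    spin_unlock(&kvm->mmu_lock);

    /* ... and obsolete pages are detected by comparing against that bit. */
    static bool is_obsolete_sp(struct kvm *kvm, struct kvm_mmu_page *sp)
    {
            return unlikely(sp->mmu_valid_gen != kvm->arch.mmu_valid_gen);
    }

    /*
     * Because mmu_lock is dropped between zap batches, a second
     * kvm_mmu_zap_all_fast() can start before the first finishes, flip the
     * bit back, and make the not-yet-zapped pages compare equal to
     * mmu_valid_gen again, i.e. stale shadow pages are treated as valid.
     * Holding slots_lock across the zap is what keeps two fast zaps from
     * overlapping.
     */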