Message-ID: <7f18cfd048609276cc298dbfa01628bd2fa15937.camel@redhat.com>
Date: Wed, 23 Feb 2022 18:32:52 +0200
From: Maxim Levitsky <mlevitsk@...hat.com>
To: Paolo Bonzini <pbonzini@...hat.com>, linux-kernel@...r.kernel.org, kvm@...r.kernel.org
Cc: seanjc@...gle.com
Subject: Re: [PATCH v2 12/18] KVM: x86/mmu: clear MMIO cache when unloading the MMU

On Thu, 2022-02-17 at 16:03 -0500, Paolo Bonzini wrote:
> For cleanliness, do not leave a stale GVA in the cache after all the roots are
> cleared.  In practice, kvm_mmu_load will go through kvm_mmu_sync_roots if
> paging is on, and will not use vcpu_match_mmio_gva at all if paging is off.
> However, leaving data in the cache might cause bugs in the future.
> 
> Signed-off-by: Paolo Bonzini <pbonzini@...hat.com>
> ---
>  arch/x86/kvm/mmu/mmu.c | 1 +
>  1 file changed, 1 insertion(+)
> 
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index b01160716c6a..4e8e3e9530ca 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -5111,6 +5111,7 @@ void kvm_mmu_unload(struct kvm_vcpu *vcpu)
>  {
>  	__kvm_mmu_unload(vcpu->kvm, &vcpu->arch.root_mmu);
>  	__kvm_mmu_unload(vcpu->kvm, &vcpu->arch.guest_mmu);
> +	vcpu_clear_mmio_info(vcpu, MMIO_GVA_ANY);
>  }
>  
>  static bool need_remote_flush(u64 old, u64 new)

One thing that has bothered me for a while with all of this is that
vcpu->arch.{mmio_gva|mmio_access|mmio_gfn|mmio_gen} are often called the
mmio cache, while we also install reserved-bit SPTEs and call those a
mmio cache as well. The above is basically a cache of a cache, sort of.

Reviewed-by: Maxim Levitsky <mlevitsk@...hat.com>

Best regards,
	Maxim Levitsky
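
For reference, the per-vCPU fields Maxim points at back the fast GVA-based
MMIO lookup that the patch now wipes. Below is a minimal sketch of the
relevant helpers, approximating arch/x86/kvm/x86.h around this series; it is
simplified and not a verbatim copy of any particular tree, so treat the exact
comments and layout as an illustration rather than the authoritative source.

	/*
	 * Sketch (simplified, not verbatim) of the GVA-side MMIO info cache
	 * and the helper that the patch calls from kvm_mmu_unload().
	 */
	#define MMIO_GVA_ANY	(~(gva_t)0)

	/* The cached entry is only valid while the memslot generation is unchanged. */
	static inline bool vcpu_match_mmio_gen(struct kvm_vcpu *vcpu)
	{
		return vcpu->arch.mmio_gen == kvm_memslots(vcpu->kvm)->generation;
	}

	/*
	 * Passing MMIO_GVA_ANY, as kvm_mmu_unload() now does, wipes the entry
	 * unconditionally; otherwise only a matching cached page is dropped.
	 */
	static inline void vcpu_clear_mmio_info(struct kvm_vcpu *vcpu, gva_t gva)
	{
		if (gva != MMIO_GVA_ANY && vcpu->arch.mmio_gva != (gva & PAGE_MASK))
			return;

		vcpu->arch.mmio_gva = 0;
	}

	/*
	 * Fast path used by the emulator: hit only if the generation still
	 * matches and the faulting GVA falls within the cached page.
	 */
	static inline bool vcpu_match_mmio_gva(struct kvm_vcpu *vcpu, unsigned long gva)
	{
		if (vcpu_match_mmio_gen(vcpu) && vcpu->arch.mmio_gva &&
		    vcpu->arch.mmio_gva == (gva & PAGE_MASK))
			return true;

		return false;
	}

The other "mmio cache" in Maxim's comment is the set of reserved-bit MMIO
SPTEs installed in the page tables themselves; those are validated against
the memslot generation on their own, which is why this per-vCPU GVA entry can
go stale independently and is worth clearing when the roots are torn down.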