Date: Wed, 11 Aug 2021 10:06:43 +0200
From: Paolo Bonzini <pbonzini@...hat.com>
To: Maxim Levitsky <mlevitsk@...hat.com>, kvm@...r.kernel.org
Cc: Jim Mattson <jmattson@...gle.com>, linux-kernel@...r.kernel.org,
	Wanpeng Li <wanpengli@...cent.com>, Borislav Petkov <bp@...en8.de>,
	Joerg Roedel <joro@...tes.org>,
	Suravee Suthikulpanit <suravee.suthikulpanit@....com>,
	"H. Peter Anvin" <hpa@...or.com>, Thomas Gleixner <tglx@...utronix.de>,
	Ingo Molnar <mingo@...hat.com>, Vitaly Kuznetsov <vkuznets@...hat.com>,
	Sean Christopherson <seanjc@...gle.com>,
	"maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT)" <x86@...nel.org>
Subject: Re: [PATCH v4 00/16] My AVIC patch queue

On 10/08/21 23:21, Maxim Levitsky wrote:
> On Tue, 2021-08-10 at 23:52 +0300, Maxim Levitsky wrote:
>> Hi!
>>
>> This is a series of bugfixes to the AVIC dynamic inhibition, made
>> while trying to fix as many bugs as possible in this area and to
>> make the AVIC+SYNIC conditional enablement work.
>>
>> * Patches 1,3-8 are code from Sean Christopherson which
>
> I mean patches 1,4-8. I forgot about patch 3, which I also added;
> it just adds a comment about the parameters of
> kvm_flush_remote_tlbs_with_address.
>
> Best regards,
> 	Maxim Levitsky
>
>> implement an alternative approach of inhibiting AVIC without
>> disabling its memslot.
>>
>> V4: addressed review feedback.
>>
>> * Patch 2 is new and fixes a bug in kvm_flush_remote_tlbs_with_address.
>>
>> * Patches 9-10 in this series fix a race condition which can cause
>> a lost write from a guest to the APIC when the APIC write races
>> the AVIC un-inhibition, and add a warning to catch this problem
>> if it re-emerges.
>>
>> V4: applied review feedback from Paolo.
>>
>> * Patch 11 is the patch from Vitaly about allowing AVIC with SYNIC
>> as long as the guest doesn't use the AutoEOI feature. I only slightly
>> changed it to expose the AutoEOI cpuid bit regardless of AVIC
>> enablement.
>>
>> V4: fixed a race that Paolo pointed out.
>>
>> * Patch 12 is a refactoring that is now possible in the SVM AVIC
>> inhibition code, because the RCU lock is no longer dropped.
>>
>> * Patches 13-15 fix another issue I found in the AVIC inhibition code:
>>
>> Currently avic_vcpu_load/avic_vcpu_put are called on userspace
>> entry/exit from KVM (aka kvm_vcpu_get/kvm_vcpu_put), and these
>> functions update the "is running" bit in the AVIC physical ID remap
>> table and update the target vCPU in the iommu code.
>>
>> However, neither function does anything when AVIC is inhibited, so
>> the "is running" bit is left set during the exit to userspace. This
>> shouldn't be a big issue, as the caller doesn't use the AVIC while it
>> is inhibited, but it is still inconsistent and can trigger a warning
>> about this in avic_vcpu_load.
>>
>> To be on the safe side, I think it makes sense to call
>> avic_vcpu_put/avic_vcpu_load when inhibiting/uninhibiting the AVIC.
>> This ensures that the work these functions do is matched.
>>
>> V4: I split the single patch into 3 patches to make it easier
>> to review, and applied Paolo's review feedback.
>>
>> * Patch 16 removes the pointless APIC base relocation from AVIC
>> to make it consistent with the rest of KVM.
>>
>> (Both AVIC and APICv only support the default base, while regular KVM
>> sort of supports any APIC base as long as it is not RAM. If the guest
>> attempts to relocate the APIC base to a non-RAM area while APICv/AVIC
>> is active, the new base will not be accelerated, while the default
>> base will continue to be AVIC/APICv backed.)
>>
>> On top of that, if the guest uses different APIC bases on different
>> vCPUs, KVM doesn't honour the fact that the MMIO range should only
>> be active on that vCPU.

No problem, b4 diff is my friend. :)

Queued, thanks.

Paolo
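The load/put pairing that patches 13-15 describe can be illustrated with a
minimal standalone C sketch. This is not the actual KVM code: the sketch_*
names and simplified structures are hypothetical, only the "is running" bit
bookkeeping is modeled, and the real functions also retarget the IOMMU and
run under the appropriate locking. The bit position follows AMD's documented
IsRunning flag (bit 62 of a physical APIC ID table entry).

#include <stdbool.h>
#include <stdint.h>

/* AMD's IsRunning flag lives in bit 62 of a physical APIC ID table entry. */
#define SKETCH_IS_RUNNING_BIT (1ULL << 62)

struct sketch_vcpu {
	uint64_t *phys_id_entry;   /* this vCPU's entry in the physical ID table */
	int cpu;                   /* host CPU the vCPU last ran on */
	bool avic_inhibited;
};

/*
 * Like avic_vcpu_put(): clear IsRunning, so an IPI aimed at this vCPU
 * causes a VM-exit on the sender instead of ringing its doorbell.
 */
static void sketch_avic_vcpu_put(struct sketch_vcpu *v)
{
	*v->phys_id_entry &= ~SKETCH_IS_RUNNING_BIT;
}

/* Like avic_vcpu_load(): record the host CPU and set IsRunning again. */
static void sketch_avic_vcpu_load(struct sketch_vcpu *v, int cpu)
{
	v->cpu = cpu;
	*v->phys_id_entry |= SKETCH_IS_RUNNING_BIT;
}

/*
 * The fix the cover letter argues for: mirror the put/load pair when
 * toggling inhibition, so the table never advertises a running vCPU
 * while AVIC is inhibited, and vice versa.
 */
static void sketch_avic_set_inhibit(struct sketch_vcpu *v, bool inhibit)
{
	if (inhibit == v->avic_inhibited)
		return;

	if (inhibit)
		sketch_avic_vcpu_put(v);              /* drop IsRunning while inhibited */
	else
		sketch_avic_vcpu_load(v, v->cpu);     /* restore it on un-inhibition */

	v->avic_inhibited = inhibit;
}

The point of routing inhibition through the same put/load pair rather than
poking the bit directly is that whatever else those functions maintain
stays consistent with the inhibited state by construction.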