Open Source and information security mailing list archives
Message-ID: <Y9R1w8kfQjCNnEfl@google.com>
Date: Sat, 28 Jan 2023 01:09:23 +0000
From: Sean Christopherson <seanjc@...gle.com>
To: Maxim Levitsky <mlevitsk@...hat.com>
Cc: kvm@...r.kernel.org, Sandipan Das <sandipan.das@....com>,
	Paolo Bonzini <pbonzini@...hat.com>, Jim Mattson <jmattson@...gle.com>,
	Peter Zijlstra <peterz@...radead.org>, Dave Hansen <dave.hansen@...ux.intel.com>,
	Borislav Petkov <bp@...en8.de>, Pawan Gupta <pawan.kumar.gupta@...ux.intel.com>,
	Thomas Gleixner <tglx@...utronix.de>, Ingo Molnar <mingo@...hat.com>,
	Josh Poimboeuf <jpoimboe@...nel.org>, Daniel Sneddon <daniel.sneddon@...ux.intel.com>,
	Jiaxi Chen <jiaxi.chen@...ux.intel.com>, Babu Moger <babu.moger@....com>,
	linux-kernel@...r.kernel.org, Jing Liu <jing2.liu@...el.com>,
	Wyes Karny <wyes.karny@....com>, x86@...nel.org,
	"H. Peter Anvin" <hpa@...or.com>
Subject: Re: [PATCH v2 07/11] KVM: x86: add a delayed hardware NMI injection interface

On Tue, Nov 29, 2022, Maxim Levitsky wrote:
> This patch adds two new vendor callbacks:

No "this patch" please, just say what it does.

> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index 684a5519812fb2..46993ce61c92db 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -871,8 +871,13 @@ struct kvm_vcpu_arch {
> 	u64 tsc_scaling_ratio;		/* current scaling ratio */
>
> 	atomic_t nmi_queued;		/* unprocessed asynchronous NMIs */
> -	unsigned nmi_pending;		/* NMI queued after currently running handler */
> +
> +	unsigned int nmi_pending;	/*
> +					 * NMI queued after currently running handler
> +					 * (not including a hardware pending NMI (e.g vNMI))
> +					 */

Put the block comment above.  I'd say collapse all of the comments about NMIs
into a single big block comment.
> 	bool nmi_injected;		/* Trying to inject an NMI this entry */
> +
> 	bool smi_pending;		/* SMI queued after currently running handler */
> 	u8 handling_intr_from_guest;
>
> @@ -10015,13 +10022,34 @@ static void process_nmi(struct kvm_vcpu *vcpu)
> 	 * Otherwise, allow two (and we'll inject the first one immediately).
> 	 */
> 	if (static_call(kvm_x86_get_nmi_mask)(vcpu) || vcpu->arch.nmi_injected)
> -		limit = 1;
> +		limit--;
> +
> +	/* Also if there is already a NMI hardware queued to be injected,
> +	 * decrease the limit again
> +	 */

	/*
	 * Block comment ...
	 */

> +	if (static_call(kvm_x86_get_hw_nmi_pending)(vcpu))

I'd prefer "is_hw_nmi_pending()" over "get", even if it means not pairing with
"set".  Though I think that's a good thing since they aren't perfect pairs.

> +		limit--;
>
> -	vcpu->arch.nmi_pending += atomic_xchg(&vcpu->arch.nmi_queued, 0);
> +	if (limit <= 0)
> +		return;
> +
> +	/* Attempt to use hardware NMI queueing */
> +	if (static_call(kvm_x86_set_hw_nmi_pending)(vcpu)) {
> +		limit--;
> +		nmi_to_queue--;
> +	}
> +
> +	vcpu->arch.nmi_pending += nmi_to_queue;
> 	vcpu->arch.nmi_pending = min(vcpu->arch.nmi_pending, limit);
> 	kvm_make_request(KVM_REQ_EVENT, vcpu);
> }
>
> +/* Return total number of NMIs pending injection to the VM */
> +int kvm_get_total_nmi_pending(struct kvm_vcpu *vcpu)
> +{
> +	return vcpu->arch.nmi_pending + static_call(kvm_x86_get_hw_nmi_pending)(vcpu);

Nothing cares about the total count, this can just be:

	bool kvm_is_nmi_pending(struct kvm_vcpu *vcpu)
	{
		return vcpu->arch.nmi_pending ||
		       static_call(kvm_x86_is_hw_nmi_pending)(vcpu);
	}

> +}
> +
> void kvm_make_scan_ioapic_request_mask(struct kvm *kvm,
> 				       unsigned long *vcpu_bitmap)
> {
> --
> 2.26.3
>
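For what it's worth, the "limit" arithmetic being discussed can be modeled as a
standalone helper.  The sketch below is purely illustrative: the two flags stand
in for the real get_nmi_mask/nmi_injected and hardware-pending (vNMI) state
queries, and nmi_limit() is a made-up name, not an actual KVM function:

```c
#include <assert.h>

/*
 * Illustrative, standalone model of the nmi_pending "limit" logic from
 * process_nmi().  Plain ints stand in for the KVM vCPU state; this is a
 * sketch of the arithmetic, not the real implementation.
 */
static int nmi_limit(int nmi_masked_or_injected, int hw_nmi_pending)
{
	/* x86 collapses queued NMIs down to at most two pending. */
	int limit = 2;

	/* One slot is consumed if NMIs are masked or one is mid-injection. */
	if (nmi_masked_or_injected)
		limit--;

	/* Another slot is consumed by a hardware-pending NMI (e.g. vNMI). */
	if (hw_nmi_pending)
		limit--;

	return limit > 0 ? limit : 0;
}
```

I.e. with both slots free up to two NMIs can be queued, and with an NMI already
being injected *and* a hardware NMI pending, nothing more can be queued.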