Message-ID: <aUJUbcyz2DXmphtU@yilunxu-OptiPlex-7050>
Date: Wed, 17 Dec 2025 14:57:49 +0800
From: Xu Yilun <yilun.xu@...ux.intel.com>
To: Chao Gao <chao.gao@...el.com>
Cc: Sean Christopherson <seanjc@...gle.com>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
Dave Hansen <dave.hansen@...ux.intel.com>, x86@...nel.org,
Kiryl Shutsemau <kas@...nel.org>,
Paolo Bonzini <pbonzini@...hat.com>, linux-kernel@...r.kernel.org,
linux-coco@...ts.linux.dev, kvm@...r.kernel.org,
Dan Williams <dan.j.williams@...el.com>
Subject: Re: [PATCH v2 2/7] KVM: x86: Extract VMXON and EFER.SVME enablement
to kernel
> >+#define x86_virt_call(fn)						\
> >+({									\
> >+	int __r;							\
> >+									\
> >+	if (IS_ENABLED(CONFIG_KVM_INTEL) &&				\
> >+	    cpu_feature_enabled(X86_FEATURE_VMX))			\
> >+		__r = x86_vmx_##fn();					\
> >+	else if (IS_ENABLED(CONFIG_KVM_AMD) &&				\
> >+		 cpu_feature_enabled(X86_FEATURE_SVM))			\
> >+		__r = x86_svm_##fn();					\
> >+	else								\
> >+		__r = -EOPNOTSUPP;					\
> >+									\
> >+	__r;								\
> >+})
> >+
> >+int x86_virt_get_cpu(int feat)
> >+{
> >+	int r;
> >+
> >+	if (!x86_virt_feature || x86_virt_feature != feat)
> >+		return -EOPNOTSUPP;
> >+
> >+	if (this_cpu_inc_return(virtualization_nr_users) > 1)
> >+		return 0;
>
> Should we assert that preemption is disabled? Calling this API when preemption
> is enabled is wrong.
>
> Maybe use __this_cpu_inc_return(), which already checks that preemption
> is disabled.
>
Wouldn't it be better to explicitly assert that preemption is disabled
in x86_virt_get_cpu() rather than rely on the check embedded in
__this_cpu_inc_return()? We are not just protecting against races on
the reference counter; we need to ensure the whole "counter increment +
x86_virt_call(get_cpu)" sequence can't be preempted.
Thanks,
Yilun