Message-ID: <ZvRWzXfAV0gN2OZN@google.com>
Date: Wed, 25 Sep 2024 11:30:37 -0700
From: Sean Christopherson <seanjc@...gle.com>
To: Maxim Levitsky <mlevitsk@...hat.com>
Cc: Chao Gao <chao.gao@...el.com>, kvm@...r.kernel.org, 
	Dave Hansen <dave.hansen@...ux.intel.com>, Thomas Gleixner <tglx@...utronix.de>, 
	Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>, "H. Peter Anvin" <hpa@...or.com>, 
	linux-kernel@...r.kernel.org, Paolo Bonzini <pbonzini@...hat.com>, x86@...nel.org
Subject: Re: [PATCH v3 2/2] VMX: reset the segment cache after segment
 initialization in vmx_vcpu_reset

On Wed, Sep 25, 2024, Maxim Levitsky wrote:
> On Wed, 2024-09-25 at 08:08 -0700, Sean Christopherson wrote:
> > > Finally, all 3 places that read the segment cache only access one piece of
> > > data (SS.AR or RIP), so it doesn't really matter whether they see an old or
> > > a new value.
> > > 
> > > I mean, in theory, if userspace changes SS's AR bytes out of the blue and we
> > > then get a preemption event, then as you say the old value is the correct one,
> > > but it really doesn't matter.
> > > 
> > > So IMHO, just ensuring that we invalidate the segment cache right after we do
> > > any changes is the simplest solution.
> > 
> > But it's not a very maintainable solution.  It fixes the immediate problem, but
> > doesn't do anything to help ensure that all future code invalidates the cache
> > after writing,
> 
> If we wrap segment cache accesses with something like segment_cache_access_begin()/end(),
> we can ensure that the segment cache is only modified inside such a section (maybe even
> enforced with macros), or at least make it clear to the developer that all writes should
> be wrapped by these functions (see the sketch below).
> 
> I also do think that we should still re-order the segment cache accesses in vmx_vcpu_reset()
> and other places just for the sake of consistency.

Yeah, I've no objection to doing that, I just don't want that to be the long-term
solution.
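
For illustration, here is a minimal sketch of the segment_cache_access_begin()/end()
idea; the names, the audit-only begin(), and the bitmask-clearing invalidation are
assumptions rather than code from this thread:

/* Hypothetical: every batch of segment register writes goes through these. */
static inline void vmx_segment_cache_access_begin(struct vcpu_vmx *vmx)
{
	/* Intentionally empty for now; exists so that writers are easy to audit. */
}

static inline void vmx_segment_cache_access_end(struct vcpu_vmx *vmx)
{
	/* Invalidate the cache once the batch of segment writes is complete. */
	vmx->segment_cache.bitmask = 0;
}

Callers such as vmx_vcpu_reset() would then bracket their vmx_set_segment() and
vmcs_write*() sequences with these helpers instead of sprinkling
vmx_segment_cache_clear() calls by hand.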

> >  nor does it guarantee that all future usage of SS.AR can tolerate
> > consuming stale values.
> > 
> > > In addition to that, I can add a warning to kvm_register_is_available() and
> > > vmx_segment_cache_test_set() that checks that only SS.AR and RIP are read
> > > from interrupt context, so that if someone attempts to read more fields in
> > > the future, this issue can be re-evaluated.
> > 
> > There's no need to add anything to vmx_segment_cache_test_set(), because it uses
> > kvm_register_is_available().  I.e. adding logic in kvm_register_is_available()
> > will suffice.
> > 
> > If we explicitly allow VMCS accesses from PMI callbacks, which we *know* can
> > tolerate stale data _and_ never run while KVM is updating segments, then we can
> > fix the preemption case by forcing a VMREAD and bypassing the cache.
> >  
> > And looking to the future, if vcpu->arch.guest_state_protected is moved/exposed
> > to common code in some way, then the common PMI code can skip trying to read guest
> > state, and the ugliness of open coding that check in the preemption path largely
> > goes away.
> This is assuming that most VMs will be protected in the future?

No, the assumption is that other architectures will have VMs with protected guest
state, e.g. ARM's CCA stuff, at which point handling the "don't bother reading
guest RIP" logic in the common PMI handler is worth doing.

> > If you're ok with the idea, I'll write changelogs and post the below (probably over
> > two patches).  I don't love adding another kvm_x86_ops callback, but I couldn't
> > come up with anything less ugly.
> 
> This was one of the reasons I didn't want to write something like that.
> If we indeed only add a callback for get_cpl_no_cache, then it is tolerable.
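
Purely for illustration (this is not the diff that follows, and the function name is
an assumption), a cache-bypassing CPL read on VMX could look roughly like:

/*
 * Hypothetical sketch: read SS.AR straight from the VMCS, bypassing the
 * segment cache, and extract the DPL.  VM86 mode always runs at CPL 0.
 */
static int vmx_get_cpl_no_cache(struct kvm_vcpu *vcpu)
{
	if (to_vmx(vcpu)->rmode.vm86_active)
		return 0;

	return VMX_AR_DPL(vmcs_read32(GUEST_SS_AR_BYTES));
}

A new kvm_x86_ops hook (e.g. ->get_cpl_no_cache()) would then let the preemption
path query the CPL without touching the register cache at all.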

...

> > diff --git a/arch/x86/kvm/kvm_cache_regs.h b/arch/x86/kvm/kvm_cache_regs.h
> > index b1eb46e26b2e..0370483003f6 100644
> > --- a/arch/x86/kvm/kvm_cache_regs.h
> > +++ b/arch/x86/kvm/kvm_cache_regs.h
> > @@ -43,6 +43,18 @@ BUILD_KVM_GPR_ACCESSORS(r14, R14)
> >  BUILD_KVM_GPR_ACCESSORS(r15, R15)
> >  #endif
> >  
> > +/*
> > + * Using the register cache from interrupt context is generally not allowed, as
> > + * caching a register and marking it available/dirty can't be done atomically,
> > + * i.e. accesses from interrupt context may clobber state or read stale data if
> > + * the vCPU task is in the process of updating the cache.  The exception is if
> > + * KVM is handling an IRQ/NMI from guest mode, as that bounded sequence doesn't
> > + * touch the cache, it runs after the cache is reset (post VM-Exit), and PMIs
> > + * need to read several registers that are cacheable.
> > + */
> > +#define kvm_assert_register_caching_allowed(vcpu)		\
> > +	lockdep_assert_once(in_task() ||			\
> > +			    READ_ONCE(vcpu->arch.handling_intr_from_guest))
> 
> This is ugly, but on second thought reasonable, given the circumstances.
> 
> How about using kvm_arch_pmi_in_guest() instead? It is a tiny bit more accurate
> and self-documenting IMHO.

Ah, yeah, good idea.
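
For reference, kvm_arch_pmi_in_guest() already wraps the handling_intr_from_guest
check, so the assertion would presumably end up as something like:

#define kvm_assert_register_caching_allowed(vcpu)		\
	lockdep_assert_once(in_task() ||			\
			    kvm_arch_pmi_in_guest(vcpu))

with kvm_register_is_available() (and thus vmx_segment_cache_test_set()) calling it
before touching regs_avail.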

> Also, how about checking for in_task() in __vmx_get_cpl() and then avoiding the cache?
> This way we will avoid adding a new callback, and in theory if there is more code that
> tries to read CPL from interrupt context, it will work for free. 
> 
> But on the other hand we might actually not want new code to get this for free. 
> Is this the reason you added the callback?

Yeah, exactly.  I actually started coding up using in_task(), but I didn't want
to allow all reads from !in_task() because then it would do the wrong thing for
any usage that isn't tolerant of stale data, i.e. where KVM _should_ read from
the cache.  Even worse, such bugs wouldn't be caught because the in_task() check
would bypass kvm_assert_register_caching_allowed().

I considered guarding the in_task() code with preempt_model_preemptible() so that
non-preemptible kernels would always use the cache, i.e. would detect unexpected
cases.  But that felt like a bit of a hack, and for preempt_dynamic kernels
preempt_model_preemptible() requires a function call, at which point it's entirely
possible that reading from the cache would end up being slower than doing VMREAD.
