Message-ID: <1aafe2de083266a294d98393a6a6692320b7a284.camel@redhat.com>
Date: Mon, 15 Jul 2024 22:21:30 -0400
From: Maxim Levitsky <mlevitsk@...hat.com>
To: Paolo Bonzini <pbonzini@...hat.com>, kvm@...r.kernel.org
Cc: Dave Hansen <dave.hansen@...ux.intel.com>, Thomas Gleixner
<tglx@...utronix.de>, Borislav Petkov <bp@...en8.de>, x86@...nel.org,
linux-kernel@...r.kernel.org, Sean Christopherson <seanjc@...gle.com>, Ingo
Molnar <mingo@...hat.com>, "H. Peter Anvin" <hpa@...or.com>
Subject: Re: [PATCH 0/2] Fix for a very old KVM bug in the segment cache
On Sat, 2024-07-13 at 12:22 +0200, Paolo Bonzini wrote:
> On 7/13/24 03:38, Maxim Levitsky wrote:
> > 1. Getting rid of the segment cache. I am not sure how much it helps
> > these days - this code is very old.
> >
> > 2. Using a read/write lock - IMHO the cleanest solution but might
> > also affect performance.
>
> A read/write lock would cause a deadlock between the writer and the
> sched_out callback, since they run on the same CPU.
>
> I think the root cause of the issue is that clearing the cache should be
> done _after_ the writes (and should have a barrier() at the beginning,
> if only for cleanliness). So your patch 1 should leave the clearing of
> vmx->segment_cache.bitmask where it was.
>
> However, that would still leave an assumption: that it's okay that a
> sched_out during vmx_vcpu_reset() (or other functions that write segment
> data in the VMCS) accesses stale data, as long as the stale data is not
> used after vmx_vcpu_reset() returns. Your patch is a safer approach,
> but maybe wrap preempt_disable()/preempt_enable() with
>
> vmx_invalidate_segment_cache_start() {
>         preempt_disable();
> }
> vmx_invalidate_segment_cache_end() {
>         vmx->segment_cache.bitmask = 0;
>         preempt_enable();
> }
>
> Paolo
>
Hi Paolo!
This looks like a very good idea; I'll do this in v2.
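
Roughly, I am thinking of something along these lines (only a sketch of how I read your
suggestion; passing vmx explicitly, the static helpers, and the example call site around
the CS setup in vmx_vcpu_reset() are my assumptions, not part of your proposal):

static void vmx_invalidate_segment_cache_start(struct vcpu_vmx *vmx)
{
        /*
         * Keep the sched_out callback off this CPU while the VMCS
         * segment fields are being rewritten, so it cannot read the
         * segment cache while it is stale.
         */
        preempt_disable();
}

static void vmx_invalidate_segment_cache_end(struct vcpu_vmx *vmx)
{
        /* Clear the cache only after all the segment writes are done. */
        vmx->segment_cache.bitmask = 0;
        preempt_enable();
}

and then, for example, around the segment setup in vmx_vcpu_reset():

        vmx_invalidate_segment_cache_start(vmx);
        vmcs_write16(GUEST_CS_SELECTOR, 0xf000);
        vmcs_writel(GUEST_CS_BASE, 0xffff0000ul);
        ...
        vmx_invalidate_segment_cache_end(vmx);
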
Thanks,
Best regards,
Maxim Levitsky