Message-ID: <ZnTOyyl6dfzFV+yS@yzhao56-desk.sh.intel.com>
Date: Fri, 21 Jun 2024 08:52:27 +0800
From: Yan Zhao <yan.y.zhao@...el.com>
To: Paolo Bonzini <pbonzini@...hat.com>
CC: Sean Christopherson <seanjc@...gle.com>, Lai Jiangshan
<jiangshanlai@...il.com>, "Paul E. McKenney" <paulmck@...nel.org>, "Josh
Triplett" <josh@...htriplett.org>, <kvm@...r.kernel.org>,
<rcu@...r.kernel.org>, <linux-kernel@...r.kernel.org>, Kevin Tian
<kevin.tian@...el.com>, Yiwei Zhang <zzyiwei@...gle.com>
Subject: Re: [PATCH 4/5] KVM: x86: Ensure a full memory barrier is emitted in
the VM-Exit path
On Fri, Jun 21, 2024 at 12:38:21AM +0200, Paolo Bonzini wrote:
> On 3/9/24 02:09, Sean Christopherson wrote:
> > From: Yan Zhao <yan.y.zhao@...el.com>
> >
> > Ensure a full memory barrier is emitted in the VM-Exit path, as a full
> > barrier is required on Intel CPUs to evict WC buffers. This will allow
> > unconditionally honoring guest PAT on Intel CPUs that support self-snoop.
> >
> > As srcu_read_lock() is always called in the VM-Exit path and it internally
> > has a smp_mb(), call smp_mb__after_srcu_read_lock() to avoid adding a
> > second fence and make sure smp_mb() is called without dependency on
> > implementation details of srcu_read_lock().
>
> Do you really need mfence or is a locked operation enough? mfence is mb(),
> not smp_mb().
>
A locked operation should be enough, since the barrier here only needs to
evict partially filled WC buffers. The Intel SDM says:
"
If the WC buffer is partially filled, the writes may be delayed until the next
occurrence of a serializing event; such as an SFENCE or MFENCE instruction,
CPUID or other serializing instruction, a read or write to uncached memory, an
interrupt occurrence, or an execution of a LOCK instruction (including one with
an XACQUIRE or XRELEASE prefix).
"
>
> > + /*
> > + * Call this to ensure WC buffers in guest are evicted after each VM
> > + * Exit, so that the evicted WC writes can be snooped across all cpus
> > + */
> > + smp_mb__after_srcu_read_lock();
>