Message-ID: <7d97ff98-96c5-7699-7b32-36651ebf173d@redhat.com>
Date: Thu, 21 Oct 2021 19:44:40 +0200
From: Paolo Bonzini <pbonzini@...hat.com>
To: Lai Jiangshan <jiangshanlai@...il.com>,
linux-kernel@...r.kernel.org, kvm@...r.kernel.org
Cc: Lai Jiangshan <laijs@...ux.alibaba.com>,
Junaid Shahid <junaids@...gle.com>,
Sean Christopherson <seanjc@...gle.com>,
Vitaly Kuznetsov <vkuznets@...hat.com>,
Wanpeng Li <wanpengli@...cent.com>,
Jim Mattson <jmattson@...gle.com>,
Joerg Roedel <joro@...tes.org>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
x86@...nel.org, "H. Peter Anvin" <hpa@...or.com>
Subject: Re: [PATCH 3/4] KVM: X86: Use smp_rmb() to pair with smp_wmb() in
mmu_try_to_unsync_pages()
On 19/10/21 13:01, Lai Jiangshan wrote:
> From: Lai Jiangshan <laijs@...ux.alibaba.com>
>
> Commit 578e1c4db2213 ("kvm: x86: Avoid taking MMU lock in
> kvm_mmu_sync_roots if no sync is needed") added an smp_wmb() in
> mmu_try_to_unsync_pages(), but the corresponding smp_load_acquire()
> is not used on the load of SPTE.W, nor can it be, since that load
> is performed by the CPU's hardware page-table walk.
>
> This patch uses smp_rmb() instead. It fixes nothing in practice
> and only corrects the comments: smp_rmb() is a NOP on x86, and not
> even a compiler barrier() is required, since the load of SPTE.W
> happens before the VMEXIT.
I think that even implicit loads during page-table walking obey read-read
ordering on x86, but this is clearer, and it is necessary for patch 4.
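
For reference, a minimal userspace sketch of the pairing under
discussion, assuming C11 atomics (build with -pthread); "unsync" and
"spte_writable" are illustrative stand-ins for sp->unsync and the
SPTE.W bit, not the actual KVM code:

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>
#include <pthread.h>

static atomic_bool unsync;          /* stands in for sp->unsync */
static atomic_bool spte_writable;   /* stands in for SPTE.W     */

/* Writer, as in mmu_try_to_unsync_pages(): mark the page unsync
 * before publishing the writable SPTE. */
static void *writer(void *arg)
{
	(void)arg;
	atomic_store_explicit(&unsync, true, memory_order_relaxed);
	/* smp_wmb() analogue: order the unsync store before the
	 * SPTE store. */
	atomic_thread_fence(memory_order_release);
	atomic_store_explicit(&spte_writable, true, memory_order_relaxed);
	return NULL;
}

/* Reader, modelling the implicit load of SPTE.W by the hardware
 * page-table walk, followed by the unsync check after VMEXIT. */
static void *reader(void *arg)
{
	(void)arg;
	if (atomic_load_explicit(&spte_writable, memory_order_relaxed)) {
		/* smp_rmb() analogue: order the SPTE.W load before
		 * the unsync load. */
		atomic_thread_fence(memory_order_acquire);
		if (!atomic_load_explicit(&unsync, memory_order_relaxed))
			puts("writable SPTE observed but !unsync");
	}
	return NULL;
}

int main(void)
{
	pthread_t w, r;

	pthread_create(&w, NULL, writer, NULL);
	pthread_create(&r, NULL, reader, NULL);
	pthread_join(w, NULL);
	pthread_join(r, NULL);
	return 0;
}

With the release/acquire fences in place, a reader that sees the
writable "SPTE" is guaranteed to also see unsync set, which is the
property the kernel comment is documenting.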
Paolo