Message-ID: <CAAhV-H6CzPAxwymk16NfjPGO=oi+iBZJYsdSMiyp2N2cDsw54g@mail.gmail.com>
Date: Mon, 24 Jun 2024 09:56:24 +0800
From: Huacai Chen <chenhuacai@...nel.org>
To: maobibo <maobibo@...ngson.cn>
Cc: Tianrui Zhao <zhaotianrui@...ngson.cn>, WANG Xuerui <kernel@...0n.name>,
Sean Christopherson <seanjc@...gle.com>, kvm@...r.kernel.org, loongarch@...ts.linux.dev,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 4/6] LoongArch: KVM: Add memory barrier before update
pmd entry
On Mon, Jun 24, 2024 at 9:37 AM maobibo <maobibo@...ngson.cn> wrote:
>
>
>
> > On 2024/6/23 at 6:18 PM, Huacai Chen wrote:
> > Hi, Bibo,
> >
> > On Wed, Jun 19, 2024 at 4:09 PM Bibo Mao <maobibo@...ngson.cn> wrote:
> >>
> >> When updating a pmd entry, such as when allocating a new pmd page or
> >> splitting a huge page into normal pages, it is necessary to update all
> >> pte entries first and only then update the pmd entry.
> >>
> >> LoongArch is a weakly ordered architecture, so there will be a problem
> >> if another vCPU sees the pmd update before the pte entries are updated.
> >> smp_wmb() is added here to enforce this ordering.
> > Memory barriers should be in pairs in most cases. That means you may
> > lose smp_rmb() in another place.
> The idea of adding smp_wmb() comes from the function __split_huge_pmd_locked()
> in mm/huge_memory.c, and the explanation there is reasonable.
>
> 	...
> 		set_ptes(mm, haddr, pte, entry, HPAGE_PMD_NR);
> 	}
> 	...
> 	smp_wmb(); /* make pte visible before pmd */
> 	pmd_populate(mm, pmd, pgtable);
>
> It is strange why smp_rmb() should be paired with smp_wmb();
> I have never heard of this rule :-(
https://docs.kernel.org/core-api/wrappers/memory-barriers.html
SMP BARRIER PAIRING
-------------------
When dealing with CPU-CPU interactions, certain types of memory barrier should
always be paired. A lack of appropriate pairing is almost certainly an error.
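
To make the pairing concrete, here is a minimal sketch (illustrative names,
not the actual arch/loongarch/kvm/mmu.c code): the writer fills the pte
table and then publishes it through the pmd, while a reader that walks the
table pairs the writer's smp_wmb() with an smp_rmb() before looking at the
ptes it reached through the pmd.

#include <linux/compiler.h>	/* READ_ONCE()/WRITE_ONCE() */
#include <linux/mm.h>		/* __pa()/__va(), PAGE_SIZE */
#include <asm/barrier.h>	/* smp_wmb()/smp_rmb() */

/* Writer: fill the pte table first, then publish it through the pmd. */
static void publish_pte_table(unsigned long *pmd_entry,
			      unsigned long *pte_table,
			      unsigned long val, int nr)
{
	int i;

	for (i = 0; i < nr; i++) {
		pte_table[i] = val;		/* update all pte entries */
		val += PAGE_SIZE;
	}

	smp_wmb();				/* make ptes visible before pmd */
	WRITE_ONCE(*pmd_entry, __pa(pte_table));
}

/* Reader: observe the pmd first, then the ptes it points to. */
static unsigned long lookup_pte(unsigned long *pmd_entry, unsigned long idx)
{
	unsigned long pmd = READ_ONCE(*pmd_entry);
	unsigned long *pte_table;

	if (!pmd)
		return 0;

	smp_rmb();		/* pairs with the writer's smp_wmb() */
	pte_table = __va(pmd);
	return READ_ONCE(pte_table[idx]);
}

In a real page-table walk the read side is usually implicit: the pte address
is computed from the pmd value loaded with READ_ONCE(), and that address
dependency already orders the two loads, so you rarely see an explicit
smp_rmb() in the walker itself. The pairing is still there, it just takes
the form of a dependent load rather than a standalone barrier.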
Huacai
>
> Regards
> Bibo Mao
> >
> > Huacai
> >
> >>
> >> Signed-off-by: Bibo Mao <maobibo@...ngson.cn>
> >> ---
> >> arch/loongarch/kvm/mmu.c | 2 ++
> >> 1 file changed, 2 insertions(+)
> >>
> >> diff --git a/arch/loongarch/kvm/mmu.c b/arch/loongarch/kvm/mmu.c
> >> index 1690828bd44b..7f04edfbe428 100644
> >> --- a/arch/loongarch/kvm/mmu.c
> >> +++ b/arch/loongarch/kvm/mmu.c
> >> @@ -163,6 +163,7 @@ static kvm_pte_t *kvm_populate_gpa(struct kvm *kvm,
> >>
> >>  			child = kvm_mmu_memory_cache_alloc(cache);
> >>  			_kvm_pte_init(child, ctx.invalid_ptes[ctx.level - 1]);
> >> +			smp_wmb(); /* make pte visible before pmd */
> >>  			kvm_set_pte(entry, __pa(child));
> >>  		} else if (kvm_pte_huge(*entry)) {
> >>  			return entry;
> >> @@ -746,6 +747,7 @@ static kvm_pte_t *kvm_split_huge(struct kvm_vcpu *vcpu, kvm_pte_t *ptep, gfn_t g
> >>  		val += PAGE_SIZE;
> >>  	}
> >>
> >> +	smp_wmb();
> >>  	/* The later kvm_flush_tlb_gpa() will flush hugepage tlb */
> >>  	kvm_set_pte(ptep, __pa(child));
> >>
> >> --
> >> 2.39.3
> >>
>
>