Message-Id: <D8C00BDA-160D-40CE-AFBD-9488F85E76CE@linux.dev>
Date: Wed, 17 Aug 2022 10:53:08 +0800
From: Muchun Song <muchun.song@...ux.dev>
To: Miaohe Lin <linmiaohe@...wei.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Mike Kravetz <mike.kravetz@...cle.com>,
Muchun Song <songmuchun@...edance.com>,
Linux MM <linux-mm@...ck.org>, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 4/6] mm: hugetlb_vmemmap: add missing smp_wmb() before
set_pte_at()
> On Aug 16, 2022, at 21:05, Miaohe Lin <linmiaohe@...wei.com> wrote:
>
> The memory barrier smp_wmb() is needed to make sure that preceding stores
> to the page contents become visible before the set_pte_at() write below.
I'm not sure you are right. I think this ordering is set_pte_at()'s
responsibility. Take arm64 as an example, since it has a relaxed memory
model: its set_pte() (quoted below) already provides a barrier guarantee.
So I am curious what issue you are actually seeing, and what the basis
for this change is.
static inline void set_pte(pte_t *ptep, pte_t pte)
{
	*ptep = pte;

	/*
	 * Only if the new pte is valid and kernel, otherwise TLB maintenance
	 * or update_mmu_cache() have the necessary barriers.
	 */
	if (pte_valid_not_user(pte)) {
		dsb(ishst);
		isb();
	}
}
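
For illustration only (not part of the patch): the question here is the
classic publish/subscribe ordering. Below is a minimal userspace C11
analogue, with a release store standing in for the smp_wmb() +
set_pte_at() pair; all names in the sketch are made up:

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <string.h>

static char page[64];			/* stands in for the page contents */
static _Atomic(char *) pte;		/* stands in for the pte */

static void *writer(void *arg)
{
	/* "copy_page()" / "reset_struct_pages()": initialize the contents. */
	memset(page, 0xa5, sizeof(page));
	/*
	 * The release store plays the role of smp_wmb() + set_pte_at():
	 * the content stores above cannot be reordered past this publish.
	 */
	atomic_store_explicit(&pte, page, memory_order_release);
	return NULL;
}

static void *reader(void *arg)
{
	char *p;

	/* Acquire pairs with the writer's release. */
	while (!(p = atomic_load_explicit(&pte, memory_order_acquire)))
		;
	/* Guaranteed to see the fully initialized contents. */
	printf("first byte: %#x\n", (unsigned char)p[0]);
	return NULL;
}

int main(void)
{
	pthread_t w, r;

	pthread_create(&r, NULL, reader, NULL);
	pthread_create(&w, NULL, writer, NULL);
	pthread_join(w, NULL);
	pthread_join(r, NULL);
	return 0;
}

Without the release/acquire pair (i.e. with plain stores), a reader on a
weakly ordered CPU could observe the pointer before the contents, which
is exactly the window the proposed smp_wmb() is meant to close.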
Thanks.
>
> Signed-off-by: Miaohe Lin <linmiaohe@...wei.com>
> ---
> mm/hugetlb_vmemmap.c | 5 +++++
> 1 file changed, 5 insertions(+)
>
> diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
> index 20f414c0379f..76b2d03a0d8d 100644
> --- a/mm/hugetlb_vmemmap.c
> +++ b/mm/hugetlb_vmemmap.c
> @@ -287,6 +287,11 @@ static void vmemmap_restore_pte(pte_t *pte, unsigned long addr,
> copy_page(to, (void *)walk->reuse_addr);
> reset_struct_pages(to);
>
> + /*
> + * Make sure that preceding stores to the page contents become visible
> + * before the set_pte_at() write.
> + */
> + smp_wmb();
> set_pte_at(&init_mm, addr, pte, mk_pte(page, pgprot));
> }
>
> --
> 2.23.0
>
>