Message-ID: <CAAhV-H6OR_HYSF451vSk_qSt1a6froSPZKY-=YSRBQgww5a+0A@mail.gmail.com>
Date: Sat, 12 Oct 2024 10:16:01 +0800
From: Huacai Chen <chenhuacai@...nel.org>
To: Bibo Mao <maobibo@...ngson.cn>
Cc: Andrey Ryabinin <ryabinin.a.a@...il.com>, Andrew Morton <akpm@...ux-foundation.org>,
David Hildenbrand <david@...hat.com>, Barry Song <baohua@...nel.org>, loongarch@...ts.linux.dev,
linux-kernel@...r.kernel.org, kasan-dev@...glegroups.com, linux-mm@...ck.org
Subject: Re: [PATCH 3/4] LoongArch: Add barrier between set_pte and memory access
Hi, Bibo,
On Thu, Oct 10, 2024 at 11:50 AM Bibo Mao <maobibo@...ngson.cn> wrote:
>
> A spurious fault can be triggered if memory is accessed right
> after the pte is set. For the user address space, the pte is set
> in kernel mode and the memory is accessed in user mode, so the
> mode switch leaves enough time for synchronization and no barrier
> is needed. However, for the kernel address space, memory may be
> accessed immediately after the pte is set.
>
> Here flush_cache_vmap()/flush_cache_vmap_early() are used as the
> synchronization points.
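
To make the window concrete, here is a minimal sketch (hypothetical
code, not part of the patch) of the pattern in question: vmap() writes
the kernel PTEs, calls flush_cache_vmap(), and the caller may
dereference the new mapping immediately afterwards.

#include <linux/errno.h>
#include <linux/mm.h>
#include <linux/vmalloc.h>

static int touch_after_vmap(struct page **pages, unsigned int n)
{
	/* vmap() sets the kernel PTEs, then calls flush_cache_vmap(). */
	void *p = vmap(pages, n, VM_MAP, PAGE_KERNEL);

	if (!p)
		return -ENOMEM;

	/*
	 * Without a barrier in flush_cache_vmap(), this load may be
	 * performed before the pte stores above become visible to the
	 * TLB refill, producing a spurious fault on a valid mapping.
	 */
	READ_ONCE(*(int *)p);

	vunmap(p);
	return 0;
}
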
>
> Signed-off-by: Bibo Mao <maobibo@...ngson.cn>
> ---
> arch/loongarch/include/asm/cacheflush.h | 14 +++++++++++++-
> 1 file changed, 13 insertions(+), 1 deletion(-)
>
> diff --git a/arch/loongarch/include/asm/cacheflush.h b/arch/loongarch/include/asm/cacheflush.h
> index f8754d08a31a..53be231319ef 100644
> --- a/arch/loongarch/include/asm/cacheflush.h
> +++ b/arch/loongarch/include/asm/cacheflush.h
> @@ -42,12 +42,24 @@ void local_flush_icache_range(unsigned long start, unsigned long end);
> #define flush_cache_dup_mm(mm) do { } while (0)
> #define flush_cache_range(vma, start, end) do { } while (0)
> #define flush_cache_page(vma, vmaddr, pfn) do { } while (0)
> -#define flush_cache_vmap(start, end) do { } while (0)
> #define flush_cache_vunmap(start, end) do { } while (0)
> #define flush_icache_user_page(vma, page, addr, len) do { } while (0)
> #define flush_dcache_mmap_lock(mapping) do { } while (0)
> #define flush_dcache_mmap_unlock(mapping) do { } while (0)
>
> +/*
> + * It is possible for an access to a kernel virtual mapping to return
> + * a spurious fault right after the pte is set. The page fault handler
> + * does not expect this type of fault. flush_cache_vmap is not exactly the
> + * right place to put this, but it seems to work well enough.
> + */
> +static inline void flush_cache_vmap(unsigned long start, unsigned long end)
> +{
> + smp_mb();
> +}
I don't know whether this is the best API for this purpose, and I think
flush_cache_vunmap() should also be an smp_mb().
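Something like this untested sketch, mirroring the vmap side, replacing
the no-op macro above:

static inline void flush_cache_vunmap(unsigned long start, unsigned long end)
{
	smp_mb();
}
#define flush_cache_vunmap flush_cache_vunmap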
Huacai
> +#define flush_cache_vmap flush_cache_vmap
> +#define flush_cache_vmap_early flush_cache_vmap
> +
> #define cache_op(op, addr) \
> __asm__ __volatile__( \
> " cacop %0, %1 \n" \
> --
> 2.39.3
>