Message-ID: <1141ad15-26ae-71a9-9f6f-26671d01a30e@loongson.cn>
Date: Sat, 12 Oct 2024 10:48:53 +0800
From: maobibo <maobibo@...ngson.cn>
To: Huacai Chen <chenhuacai@...nel.org>
Cc: Andrey Ryabinin <ryabinin.a.a@...il.com>,
Andrew Morton <akpm@...ux-foundation.org>,
David Hildenbrand <david@...hat.com>, Barry Song <baohua@...nel.org>,
loongarch@...ts.linux.dev, linux-kernel@...r.kernel.org,
kasan-dev@...glegroups.com, linux-mm@...ck.org
Subject: Re: [PATCH 3/4] LoongArch: Add barrier between set_pte and memory
access
Huacai,
On 2024/10/12 10:16 AM, Huacai Chen wrote:
> Hi, Bibo,
>
> On Thu, Oct 10, 2024 at 11:50 AM Bibo Mao <maobibo@...ngson.cn> wrote:
>>
>> It is possible to get a spurious fault if memory is accessed
>> right after the pte is set. For the user address space, the pte is
>> set in kernel mode while the memory is accessed in user mode, so
>> there is plenty of time for synchronization and no barrier is
>> needed. However, for the kernel address space, memory may be
>> accessed immediately after the pte is set.
>>
>> Here flush_cache_vmap()/flush_cache_vmap_early() are used for
>> synchronization.
>>
>> Signed-off-by: Bibo Mao <maobibo@...ngson.cn>
>> ---
>> arch/loongarch/include/asm/cacheflush.h | 14 +++++++++++++-
>> 1 file changed, 13 insertions(+), 1 deletion(-)
>>
>> diff --git a/arch/loongarch/include/asm/cacheflush.h b/arch/loongarch/include/asm/cacheflush.h
>> index f8754d08a31a..53be231319ef 100644
>> --- a/arch/loongarch/include/asm/cacheflush.h
>> +++ b/arch/loongarch/include/asm/cacheflush.h
>> @@ -42,12 +42,24 @@ void local_flush_icache_range(unsigned long start, unsigned long end);
>> #define flush_cache_dup_mm(mm) do { } while (0)
>> #define flush_cache_range(vma, start, end) do { } while (0)
>> #define flush_cache_page(vma, vmaddr, pfn) do { } while (0)
>> -#define flush_cache_vmap(start, end) do { } while (0)
>> #define flush_cache_vunmap(start, end) do { } while (0)
>> #define flush_icache_user_page(vma, page, addr, len) do { } while (0)
>> #define flush_dcache_mmap_lock(mapping) do { } while (0)
>> #define flush_dcache_mmap_unlock(mapping) do { } while (0)
>>
>> +/*
>> + * It is possible for a kernel virtual mapping access to return a spurious
>> + * fault if it's accessed right after the pte is set. The page fault handler
>> + * does not expect this type of fault. flush_cache_vmap is not exactly the
>> + * right place to put this, but it seems to work well enough.
>> + */
>> +static inline void flush_cache_vmap(unsigned long start, unsigned long end)
>> +{
>> + smp_mb();
>> +}
> I don't know whether this is the best API to do this, and I think
> flush_cache_vunmap() should also be an smp_mb().
I do not know either :-(. It seems that flush_cache_vmap() is better
than arch_sync_kernel_mappings(), since flush_cache_vmap() is used by
the vmalloc/kasan/percpu code, while arch_sync_kernel_mappings() is
only used by vmalloc.
For flush_cache_vunmap(), it is called before pte_clear(); here is a
usage example:
void vunmap_range(unsigned long addr, unsigned long end)
{
	flush_cache_vunmap(addr, end);
	vunmap_range_noflush(addr, end);
	flush_tlb_kernel_range(addr, end);
}
So I think it is not necessary to add smp_mb() in flush_cache_vunmap().
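For comparison, on the map side flush_cache_vmap() is called only after
the page table entries have been installed, which is exactly the point
where the barrier is needed before the new mapping may be accessed.
Roughly, from mm/vmalloc.c (details may differ between kernel versions):

static int vmap_pages_range(unsigned long addr, unsigned long end,
		pgprot_t prot, struct page **pages, unsigned int page_shift)
{
	int err;

	/* install the kernel PTEs for the new mapping */
	err = vmap_pages_range_noflush(addr, end, prot, pages, page_shift);
	/* with this patch, acts as smp_mb() on LoongArch before any access */
	flush_cache_vmap(addr, end);
	return err;
}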
Regards
Bibo Mao
>
>
> Huacai
>
>> +#define flush_cache_vmap flush_cache_vmap
>> +#define flush_cache_vmap_early flush_cache_vmap
>> +
>> #define cache_op(op, addr) \
>> __asm__ __volatile__( \
>> " cacop %0, %1 \n" \
>> --
>> 2.39.3
>>