Message-ID: <20210412142203.6d86e5c6@xhacker.debian>
Date: Mon, 12 Apr 2021 14:22:03 +0800
From: Jisheng Zhang <Jisheng.Zhang@...aptics.com>
To: Palmer Dabbelt <palmer@...belt.com>
Cc: liu@...yang.me, alex@...ti.fr, waterman@...s.berkeley.edu,
Paul Walmsley <paul.walmsley@...ive.com>,
aou@...s.berkeley.edu, akpm@...ux-foundation.org,
geert@...ux-m68k.org, linux-riscv@...ts.infradead.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH] implement flush_cache_vmap and flush_cache_vunmap for
RISC-V
On Sun, 11 Apr 2021 14:41:07 -0700 (PDT)
Palmer Dabbelt <palmer@...belt.com> wrote:
>
>
> On Sun, 28 Mar 2021 18:55:09 PDT (-0700), liu@...yang.me wrote:
> > This patch implements flush_cache_vmap and flush_cache_vunmap for
> > RISC-V, since these functions might modify PTE. Without this patch,
> > SFENCE.VMA won't be added to the related code, which might introduce a bug
> > in some out-of-order micro-architecture implementations.
> >
> > Signed-off-by: Jiuyang Liu <liu@...yang.me>
> > ---
> > arch/riscv/include/asm/cacheflush.h | 8 ++++++++
> > 1 file changed, 8 insertions(+)
> >
> > diff --git a/arch/riscv/include/asm/cacheflush.h b/arch/riscv/include/asm/cacheflush.h
> > index 23ff70350992..4adf25248c43 100644
> > --- a/arch/riscv/include/asm/cacheflush.h
> > +++ b/arch/riscv/include/asm/cacheflush.h
> > @@ -8,6 +8,14 @@
> >
> > #include <linux/mm.h>
> >
> > +/*
> > + * flush_cache_vmap and flush_cache_vunmap might modify PTE, needs SFENCE.VMA.
> > + * - flush_cache_vmap is invoked after map_kernel_range() has installed the page table entries.
> > + * - flush_cache_vunmap is invoked before unmap_kernel_range() deletes the page table entries
>
> These should have line breaks.
>
> > + */
> > +#define flush_cache_vmap(start, end) flush_tlb_all()
>
> We shouldn't need cache flushes for permission upgrades: the ISA allows
> the old mappings to be visible until a fence, but the theory is that
> window will be short for reasonable microarchitectures, so the overhead
> of flushing the entire TLB would overwhelm the cost of the extra faults.
> There are a handful of places where we preemptively flush, but those are
> generally because we can't handle the faults correctly.
>
> If you have some benchmark that demonstrates a performance issue on real
> hardware here then I'm happy to talk about this further, but this
> assumption is all over arch/riscv so I'd prefer to keep things
> consistent for now.
IMHO, flush_cache_vmap() isn't necessary. From the previous discussion, it
seems the reason to implement flush_cache_vmap() is that we missed an
sfence.vma in the vmalloc-related code path. But...
The RISC-V privileged spec says: "In particular, if a leaf PTE is modified
but a subsuming SFENCE.VMA is not executed, either the old translation or
the new translation will be used, but the choice is unpredictable. The
behavior is otherwise well-defined."
* If the old translation is used, we take a page fault, but vmalloc_fault()
  will handle it, and local_flush_tlb_page() will issue the sfence.vma
  properly.
* If the new translation is used, we don't need to do anything.
In both cases, we don't need to implement flush_cache_vmap().
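
For reference, the fault path I mean looks roughly like this; an abridged
sketch of vmalloc_fault() in arch/riscv/mm/fault.c (error paths and the
full p4d/pud/pmd/pte walk are omitted, so don't read it as the verbatim
source):

static inline void vmalloc_fault(struct pt_regs *regs, int code,
				 unsigned long addr)
{
	pgd_t *pgd, *pgd_k;
	unsigned long pfn;
	int index;

	/*
	 * Sync this task's top-level page table with the reference
	 * table in init_mm, which the vmalloc code updated.
	 */
	index = pgd_index(addr);
	pfn = csr_read(CSR_SATP) & SATP_PPN;
	pgd = (pgd_t *)pfn_to_virt(pfn) + index;
	pgd_k = init_mm.pgd + index;
	if (!pgd_present(*pgd_k)) {
		no_context(regs, addr);
		return;
	}
	set_pgd(pgd, *pgd_k);

	/* ... walk and sync p4d/pud/pmd/pte of the reference table ... */

	/*
	 * SFENCE.VMA is an ordering constraint, not a cache flush, so
	 * it is needed even after writing invalid entries.
	 */
	local_flush_tlb_page(addr);
}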
From another angle: even if we insert the sfence.vma in advance rather
than relying on vmalloc_fault(), we still can't ensure that other harts
use the new translation. Take the small window below, for example:
cpu0                                  cpu1
map_kernel_range()
  map_kernel_range_noflush()
                                      access the new vmalloced space
  flush_cache_vmap()
That is to say, we still rely on vmalloc_fault().
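
To make the window concrete, the generic caller reads roughly like this
(paraphrased from mm/vmalloc.c of this era; exact names vary between
kernel versions), so the new PTEs are already visible to other harts
before flush_cache_vmap() runs:

int map_kernel_range(unsigned long start, unsigned long size,
		     pgprot_t prot, struct page **pages)
{
	int ret;

	/* Install the PTEs; no fence has been executed yet. */
	ret = map_kernel_range_noflush(start, size, prot, pages);
	/* Another hart can touch the new mapping in this window. */
	flush_cache_vmap(start, start + size);
	return ret;
}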
>
> > +#define flush_cache_vunmap(start, end) flush_tlb_all()
>
In the flush_cache_vunmap() caller's code path, the translation is
modified *after* flush_cache_vunmap(), for example:
unmap_kernel_range()
  flush_cache_vunmap()
  vunmap_page_range()
  flush_tlb_kernel_range()
IOW, when we call flush_cache_vunmap(), the translation has not changed
yet. Instead, I believe it is flush_tlb_kernel_range() that flushes the
stale translations after vunmap_page_range() has changed them.
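
For reference, the generic implementation reads roughly as follows
(paraphrased; in newer kernels vunmap_page_range() is named
unmap_kernel_range_noflush()), which shows the ordering directly:

void unmap_kernel_range(unsigned long addr, unsigned long size)
{
	unsigned long end = addr + size;

	/* Called while the old translation is still in place. */
	flush_cache_vunmap(addr, end);
	/* Actually remove the PTEs... */
	vunmap_page_range(addr, end);
	/* ...and this is what fences away the stale translations. */
	flush_tlb_kernel_range(addr, end);
}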
Regards