Message-ID: <17a6582c28ff3a008d3ef960c3e36c0bc7013e33.camel@wdc.com>
Date: Mon, 12 Aug 2019 17:13:36 +0000
From: Atish Patra <Atish.Patra@....com>
To: "troy.benjegerdes@...ive.com" <troy.benjegerdes@...ive.com>
CC: "linux-riscv@...ts.infradead.org" <linux-riscv@...ts.infradead.org>,
Anup Patel <Anup.Patel@....com>,
"gregkh@...uxfoundation.org" <gregkh@...uxfoundation.org>,
"rminnich@...il.com" <rminnich@...il.com>,
"paul.walmsley@...ive.com" <paul.walmsley@...ive.com>,
"aou@...s.berkeley.edu" <aou@...s.berkeley.edu>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"allison@...utok.net" <allison@...utok.net>,
"alexios.zavras@...el.com" <alexios.zavras@...el.com>,
"palmer@...ive.com" <palmer@...ive.com>
Subject: Re: [PATCH] RISC-V: Issue a local tlb flush if possible.
On Mon, 2019-08-12 at 10:36 -0500, Troy Benjegerdes wrote:
> > On Aug 9, 2019, at 8:43 PM, Atish Patra <atish.patra@....com> wrote:
> >
> > In RISC-V, a tlb flush happens via an SBI call, which is
> > expensive. If the target cpumask contains the local hartid,
> > some cost can be saved by issuing a local tlb flush, as we
> > do that in OpenSBI anyway.
>
> Is there anything other than convention and current usage that
> prevents the kernel from natively handling TLB flushes without ever
> making the SBI call?
>
> Someone is eventually going to want to run the Linux kernel in
> machine mode, likely for performance and/or security reasons, and
> this will require flushing TLBs natively anyway.
>
That support was already added by Christoph in the nommu series:
https://lkml.org/lkml/2019/6/10/935
The idea there is to just send IPIs directly from Linux. The same
approach is not good in Supervisor mode until we can get rid of IPIs
via SBI altogether. Otherwise, every remote tlb flush becomes even
more expensive: the IPI itself traps to M-mode, and the target hart
has to come back to S-mode before it can execute sfence.vma.
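
For reference, the direct-IPI scheme looks roughly like this (just a
sketch of the idea built on the generic on_each_cpu() helper, not the
actual code from the nommu series):

/* Flush every hart's TLB via direct IPIs, no SBI trap involved. */
static void ipi_flush_tlb_all(void *info)
{
	/* Runs on each CPU: one sfence.vma, no privilege-mode switch. */
	local_flush_tlb_all();
}

void flush_tlb_all(void)
{
	/* IPI all online CPUs (including self) and wait for completion. */
	on_each_cpu(ipi_flush_tlb_all, NULL, 1);
}

In M-mode the kernel owns the IPI hardware, so the interrupt lands
straight in the handler above; in S-mode the same IPI would have to
bounce through SBI first.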
>
> > Signed-off-by: Atish Patra <atish.patra@....com>
> > ---
> > arch/riscv/include/asm/tlbflush.h | 33 +++++++++++++++++++++++++++++----
> > 1 file changed, 29 insertions(+), 4 deletions(-)
> >
> > diff --git a/arch/riscv/include/asm/tlbflush.h b/arch/riscv/include/asm/tlbflush.h
> > index 687dd19735a7..b32ba4fa5888 100644
> > --- a/arch/riscv/include/asm/tlbflush.h
> > +++ b/arch/riscv/include/asm/tlbflush.h
> > @@ -8,6 +8,7 @@
> >  #define _ASM_RISCV_TLBFLUSH_H
> >
> >  #include <linux/mm_types.h>
> > +#include <linux/sched.h>
> >  #include <asm/smp.h>
> >
> >  /*
> > @@ -46,14 +47,38 @@ static inline void remote_sfence_vma(struct cpumask *cmask, unsigned long start,
> >  				     unsigned long size)
> >  {
> >  	struct cpumask hmask;
> > +	struct cpumask tmask;
> > +	int cpuid = smp_processor_id();
> >
> >  	cpumask_clear(&hmask);
> > -	riscv_cpuid_to_hartid_mask(cmask, &hmask);
> > -	sbi_remote_sfence_vma(hmask.bits, start, size);
> > +	cpumask_clear(&tmask);
> > +
> > +	if (cmask)
> > +		cpumask_copy(&tmask, cmask);
> > +	else
> > +		cpumask_copy(&tmask, cpu_online_mask);
> > +
> > +	if (cpumask_test_cpu(cpuid, &tmask)) {
> > +		/* Save trap cost by issuing a local tlb flush here */
> > +		if ((start == 0 && size == -1) || (size > PAGE_SIZE))
> > +			local_flush_tlb_all();
> > +		else if (size == PAGE_SIZE)
> > +			local_flush_tlb_page(start);
> > +		cpumask_clear_cpu(cpuid, &tmask);
> > +	} else if (cpumask_empty(&tmask)) {
> > +		/* cpumask is empty. So just do a local flush */
> > +		local_flush_tlb_all();
> > +		return;
> > +	}
> > +
> > +	if (!cpumask_empty(&tmask)) {
> > +		riscv_cpuid_to_hartid_mask(&tmask, &hmask);
> > +		sbi_remote_sfence_vma(hmask.bits, start, size);
> > +	}
> >  }
> >
> > -#define flush_tlb_all() sbi_remote_sfence_vma(NULL, 0, -1)
> > -#define flush_tlb_page(vma, addr) flush_tlb_range(vma, addr, 0)
> > +#define flush_tlb_all() remote_sfence_vma(NULL, 0, -1)
> > +#define flush_tlb_page(vma, addr) flush_tlb_range(vma, addr, (addr) + PAGE_SIZE)
> >  #define flush_tlb_range(vma, start, end) \
> >  	remote_sfence_vma(mm_cpumask((vma)->vm_mm), start, (end) - (start))
> >  #define flush_tlb_mm(mm) \
> > --
> > 2.21.0
> >
> >
> > _______________________________________________
> > linux-riscv mailing list
> > linux-riscv@...ts.infradead.org
> > http://lists.infradead.org/mailman/listinfo/linux-riscv
--
Regards,
Atish