Message-ID: <mhng-3e79924e-d965-4156-836d-19cc8fb8cafe@palmer-ri-x1c9a>
Date: Tue, 19 Mar 2024 17:48:08 -0700 (PDT)
From: Palmer Dabbelt <palmer@...belt.com>
To: cyy@...self.name
CC: linux-riscv@...ts.infradead.org, linux-kernel@...r.kernel.org,
Paul Walmsley <paul.walmsley@...ive.com>, aou@...s.berkeley.edu, alexghiti@...osinc.com,
Conor Dooley <conor.dooley@...rochip.com>, jszhang@...nel.org, Andrew Waterman <andrew@...ive.com>, cyy@...self.name
Subject: Re: [PATCH] RISC-V: only flush icache when it has VM_EXEC set
On Tue, 09 Jan 2024 10:48:59 PST (-0800), cyy@...self.name wrote:
> Since an I-cache flush on current RISC-V needs to send IPIs to every CPU
> core in the system, it is very costly. Limiting flush_icache_mm to be
> called only when vma->vm_flags has VM_EXEC set helps minimize the
> frequency of these operations. This improves performance and reduces
> disturbance when copy_from_user_page is needed, such as when profiling
> with perf.
>
> As for I-D coherence concerns, this will not break if such a page gains
> the VM_EXEC flag in the future, since that case is already checked in the
> __set_pte_at function.
>
> Signed-off-by: Yangyu Chen <cyy@...self.name>
> ---
> arch/riscv/include/asm/cacheflush.h | 7 +++++--
> 1 file changed, 5 insertions(+), 2 deletions(-)
>
> diff --git a/arch/riscv/include/asm/cacheflush.h b/arch/riscv/include/asm/cacheflush.h
> index 3cb53c4df27c..915f532dc336 100644
> --- a/arch/riscv/include/asm/cacheflush.h
> +++ b/arch/riscv/include/asm/cacheflush.h
> @@ -33,8 +33,11 @@ static inline void flush_dcache_page(struct page *page)
> * so instead we just flush the whole thing.
> */
> #define flush_icache_range(start, end) flush_icache_all()
> -#define flush_icache_user_page(vma, pg, addr, len) \
> - flush_icache_mm(vma->vm_mm, 0)
> +#define flush_icache_user_page(vma, pg, addr, len) \
> +do { \
> + if (vma->vm_flags & VM_EXEC) \
> + flush_icache_mm(vma->vm_mm, 0); \
> +} while (0)
>
> #ifdef CONFIG_64BIT
> #define flush_cache_vmap(start, end) flush_tlb_kernel_range(start, end)
I'm not super worried about the benchmarks; I think we can just
open-loop assume this is faster by avoiding the flushes. I do think we
need a hook into at least tlb_update_vma_flags(), though, to insert the
fence.i when upgrading a mapping to include VM_EXEC.
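The condition such a hook would need to test can be sketched as below. This is only an illustration of the upgrade check being discussed, not kernel code: the helper name and the standalone VM_EXEC constant are hypothetical stand-ins, and a real implementation would live in the arch code and trigger flush_icache_mm() (ultimately fence.i) instead of returning a flag.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical stand-in for the kernel's VM_EXEC vm_flags bit. */
#define VM_EXEC 0x4UL

/*
 * An icache flush is only required when a mapping that was not
 * executable becomes executable; flushing on every flag update
 * (or never) would be either wasteful or incorrect.
 */
static bool needs_icache_flush(unsigned long oldflags, unsigned long newflags)
{
	return !(oldflags & VM_EXEC) && (newflags & VM_EXEC);
}
```

The interesting case is the upgrade (old lacks VM_EXEC, new has it); downgrades and unchanged mappings need no fence.i.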