Message-ID: <aba31303-e992-9ad7-995f-d159f79a55f7@huawei.com>
Date: Wed, 21 Jun 2023 14:15:26 +0800
From: Kefeng Wang <wangkefeng.wang@...wei.com>
To: Jisheng Zhang <jszhang@...nel.org>,
Paul Walmsley <paul.walmsley@...ive.com>,
Palmer Dabbelt <palmer@...belt.com>,
Albert Ou <aou@...s.berkeley.edu>
CC: <linux-riscv@...ts.infradead.org>, <linux-kernel@...r.kernel.org>,
Suren Baghdasaryan <surenb@...gle.com>
Subject: Re: [PATCH] riscv: mm: try VMA lock-based page fault handling first
On 2023/5/24 0:59, Jisheng Zhang wrote:
> Attempt VMA lock-based page fault handling first, and fall back to the
> existing mmap_lock-based handling if that fails.
>
> A simple run of the ebizzy benchmark on a Lichee Pi 4A shows that
> PER_VMA_LOCK improves its throughput by about 32.68%. In theory, the
> more CPUs, the bigger the improvement, but I don't have any HW
> platform with more than 4 CPUs.
>
> This is the riscv variant of "x86/mm: try VMA lock-based page fault
> handling first".
>
> Signed-off-by: Jisheng Zhang <jszhang@...nel.org>
> ---
> Any performance numbers are welcome! Especially the numbers on HW
> platforms with 8 or more CPUs.
>
> arch/riscv/Kconfig | 1 +
> arch/riscv/mm/fault.c | 33 +++++++++++++++++++++++++++++++++
> 2 files changed, 34 insertions(+)
>
> diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
> index 62e84fee2cfd..b958f67f9a12 100644
> --- a/arch/riscv/Kconfig
> +++ b/arch/riscv/Kconfig
> @@ -42,6 +42,7 @@ config RISCV
> select ARCH_SUPPORTS_DEBUG_PAGEALLOC if MMU
> select ARCH_SUPPORTS_HUGETLBFS if MMU
> select ARCH_SUPPORTS_PAGE_TABLE_CHECK if MMU
> + select ARCH_SUPPORTS_PER_VMA_LOCK if MMU
No need for the "if MMU" condition on this select: PER_VMA_LOCK already
depends on MMU, see mm/Kconfig:

config PER_VMA_LOCK
	def_bool y
	depends on ARCH_SUPPORTS_PER_VMA_LOCK && MMU && SMP
	help
	  Allow per-vma locking during page fault handling.

so a plain "select ARCH_SUPPORTS_PER_VMA_LOCK" is enough.

Reviewed-by: Kefeng Wang <wangkefeng.wang@...wei.com>
> select ARCH_USE_MEMTEST
> select ARCH_USE_QUEUED_RWLOCKS
> select ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT if MMU
> diff --git a/arch/riscv/mm/fault.c b/arch/riscv/mm/fault.c
> index 8685f85a7474..eccdddf26f4b 100644
> --- a/arch/riscv/mm/fault.c
> +++ b/arch/riscv/mm/fault.c
> @@ -286,6 +286,36 @@ void handle_page_fault(struct pt_regs *regs)
> flags |= FAULT_FLAG_WRITE;
> else if (cause == EXC_INST_PAGE_FAULT)
> flags |= FAULT_FLAG_INSTRUCTION;
> +#ifdef CONFIG_PER_VMA_LOCK
> + if (!(flags & FAULT_FLAG_USER))
> + goto lock_mmap;
> +
> + vma = lock_vma_under_rcu(mm, addr);
> + if (!vma)
> + goto lock_mmap;
> +
> + if (unlikely(access_error(cause, vma))) {
> + vma_end_read(vma);
> + goto lock_mmap;
> + }
> +
> + fault = handle_mm_fault(vma, addr, flags | FAULT_FLAG_VMA_LOCK, regs);
> + vma_end_read(vma);
> +
> + if (!(fault & VM_FAULT_RETRY)) {
> + count_vm_vma_lock_event(VMA_LOCK_SUCCESS);
> + goto done;
> + }
> + count_vm_vma_lock_event(VMA_LOCK_RETRY);
> +
> + if (fault_signal_pending(fault, regs)) {
> + if (!user_mode(regs))
> + no_context(regs, addr);
> + return;
> + }
> +lock_mmap:
> +#endif /* CONFIG_PER_VMA_LOCK */
> +
> retry:
> mmap_read_lock(mm);
> vma = find_vma(mm, addr);
> @@ -355,6 +385,9 @@ void handle_page_fault(struct pt_regs *regs)
>
> mmap_read_unlock(mm);
>
> +#ifdef CONFIG_PER_VMA_LOCK
> +done:
> +#endif
> if (unlikely(fault & VM_FAULT_ERROR)) {
> tsk->thread.bad_cause = cause;
> mm_fault_error(regs, addr, fault);
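
For anyone following the control flow: the new fast path in the hunk
above boils down to the sketch below (an annotated restatement of the
quoted code, reusing the mm/addr/cause/flags variables from
handle_page_fault(); not a drop-in replacement, and the signal-pending
handling is elided):

#ifdef CONFIG_PER_VMA_LOCK
	/* Kernel-mode faults keep taking mmap_lock as before. */
	if (!(flags & FAULT_FLAG_USER))
		goto lock_mmap;

	/*
	 * Find and read-lock the VMA under RCU without touching
	 * mmap_lock; returns NULL when the fault cannot be served
	 * locklessly (no VMA, lock contention, etc.), in which case
	 * we fall back to the slow path.
	 */
	vma = lock_vma_under_rcu(mm, addr);
	if (!vma)
		goto lock_mmap;

	/* The permission check must also run under the per-VMA lock. */
	if (unlikely(access_error(cause, vma))) {
		vma_end_read(vma);
		goto lock_mmap;
	}

	fault = handle_mm_fault(vma, addr, flags | FAULT_FLAG_VMA_LOCK, regs);
	vma_end_read(vma);

	/*
	 * VM_FAULT_RETRY means the fault could not be handled under
	 * the per-VMA lock alone and must be retried with mmap_lock.
	 */
	if (!(fault & VM_FAULT_RETRY)) {
		count_vm_vma_lock_event(VMA_LOCK_SUCCESS);
		goto done;
	}
	count_vm_vma_lock_event(VMA_LOCK_RETRY);
lock_mmap:
#endif /* CONFIG_PER_VMA_LOCK */
	/* ... existing mmap_read_lock()-based slow path ... */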