Message-ID: <0b136cc3-c85e-1183-5ddc-ab99fd58c012@arm.com>
Date: Thu, 25 May 2023 12:30:04 +0530
From: Anshuman Khandual <anshuman.khandual@....com>
To: Jisheng Zhang <jszhang@...nel.org>,
Catalin Marinas <catalin.marinas@....com>,
Will Deacon <will@...nel.org>
Cc: linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] arm64: mm: pass original fault address to
handle_mm_fault() in PER_VMA_LOCK block
On 5/24/23 18:42, Jisheng Zhang wrote:
> When reading arm64's PER_VMA_LOCK support code, I noticed a slight
> difference between arm64 and other architectures when calling
> handle_mm_fault() during VMA lock-based page fault handling: the fault
> address is masked before being passed to handle_mm_fault(). This also
> differs from the mmap_lock-based handling. I think we need to pass the
> original fault address to handle_mm_fault(), as was done in
> commit 84c5e23edecd ("arm64: mm: Pass original fault address to
> handle_mm_fault()").
>
> Going further through the code path, we can see that the masked fault
> address causes a mismatch between the address reported by the perf sw
> major/minor page fault events and the perf page fault sw event:
>
> do_page_fault
>   perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, ..., addr) // orig addr
>   handle_mm_fault
>     mm_account_fault
>       perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MAJ, ...) // masked addr
>
> Fixes: cd7f176aea5f ("arm64/mm: try VMA lock-based page fault handling first")
> Signed-off-by: Jisheng Zhang <jszhang@...nel.org>
LGTM
Reviewed-by: Anshuman Khandual <anshuman.khandual@....com>
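
For reference, the major/minor events come from mm_account_fault() in
mm/memory.c, which reports whatever address handle_mm_fault() received.
A trimmed sketch of that helper (bodies simplified for illustration;
see mm/memory.c for the full version):

static inline void mm_account_fault(struct pt_regs *regs,
				    unsigned long address, unsigned int flags,
				    vm_fault_t ret)
{
	bool major;

	/* ... retry/completion accounting elided ... */

	major = (ret & VM_FAULT_MAJOR) || (flags & FAULT_FLAG_TRIED);

	/* 'address' here is whatever the caller passed in, so a masked
	 * address from the PER_VMA_LOCK path ends up in the perf event. */
	if (major)
		perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MAJ, 1, regs, address);
	else
		perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MIN, 1, regs, address);
}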
> ---
> arch/arm64/mm/fault.c | 3 +--
> 1 file changed, 1 insertion(+), 2 deletions(-)
>
> diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
> index cb21ccd7940d..6045a5117ac1 100644
> --- a/arch/arm64/mm/fault.c
> +++ b/arch/arm64/mm/fault.c
> @@ -600,8 +600,7 @@ static int __kprobes do_page_fault(unsigned long far, unsigned long esr,
> vma_end_read(vma);
> goto lock_mmap;
> }
> - fault = handle_mm_fault(vma, addr & PAGE_MASK,
> - mm_flags | FAULT_FLAG_VMA_LOCK, regs);
> + fault = handle_mm_fault(vma, addr, mm_flags | FAULT_FLAG_VMA_LOCK, regs);
> vma_end_read(vma);
>
> if (!(fault & VM_FAULT_RETRY)) {
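
Concretely, with 4K pages PAGE_MASK clears the low 12 bits, so the
major/minor events were attributed to the page base rather than the
exact faulting address (made-up address for illustration):

	/* assuming 4K pages: PAGE_MASK == ~0xfffUL */
	unsigned long addr   = 0xffff12345678;		/* original fault address */
	unsigned long masked = addr & PAGE_MASK;	/* 0xffff12345000 */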