Message-ID: <ZD2gsbN2K66oXT69@x1n>
Date: Mon, 17 Apr 2023 15:40:33 -0400
From: Peter Xu <peterx@...hat.com>
To: Suren Baghdasaryan <surenb@...gle.com>
Cc: akpm@...ux-foundation.org, willy@...radead.org, hannes@...xchg.org,
mhocko@...e.com, josef@...icpanda.com, jack@...e.cz,
ldufour@...ux.ibm.com, laurent.dufour@...ibm.com,
michel@...pinasse.org, liam.howlett@...cle.com, jglisse@...gle.com,
vbabka@...e.cz, minchan@...gle.com, dave@...olabs.net,
punit.agrawal@...edance.com, lstoakes@...il.com,
linux-mm@...ck.org, linux-kernel@...r.kernel.org,
kernel-team@...roid.com
Subject: Re: [PATCH v2 1/1] mm: do not increment pgfault stats when page
fault handler retries
On Fri, Apr 14, 2023 at 05:08:18PM -0700, Suren Baghdasaryan wrote:
> If the page fault handler requests a retry, we will count the fault
> multiple times. This is a relatively harmless problem as the retry paths
> are not often requested, and the only user-visible problem is that the
> fault counter will be slightly higher than it should be. Nevertheless,
> userspace only took one fault, and should not see the fact that the
> kernel had to retry the fault multiple times.
>
> Move page fault accounting into mm_account_fault() and skip incomplete
> faults which will be accounted upon completion.
>
> Fixes: d065bd810b6d ("mm: retry page fault when blocking on disk transfer")
> Signed-off-by: Suren Baghdasaryan <surenb@...gle.com>
> ---
> mm/memory.c | 45 ++++++++++++++++++++++++++-------------------
> 1 file changed, 26 insertions(+), 19 deletions(-)
>
> diff --git a/mm/memory.c b/mm/memory.c
> index 01a23ad48a04..c3b709ceeed7 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -5080,24 +5080,30 @@ static vm_fault_t __handle_mm_fault(struct vm_area_struct *vma,
> * updates. However, note that the handling of PERF_COUNT_SW_PAGE_FAULTS should
> * still be in per-arch page fault handlers at the entry of page fault.
> */
> -static inline void mm_account_fault(struct pt_regs *regs,
> +static inline void mm_account_fault(struct mm_struct *mm, struct pt_regs *regs,
> unsigned long address, unsigned int flags,
> vm_fault_t ret)
> {
> bool major;
>
> /*
> - * We don't do accounting for some specific faults:
> - *
> - * - Unsuccessful faults (e.g. when the address wasn't valid). That
> - * includes arch_vma_access_permitted() failing before reaching here.
> - * So this is not a "this many hardware page faults" counter. We
> - * should use the hw profiling for that.
> - *
> - * - Incomplete faults (VM_FAULT_RETRY). They will only be counted
> - * once they're completed.
> + * Do not account for incomplete faults (VM_FAULT_RETRY). They will be
> + * counted upon completion.
> */
> - if (ret & (VM_FAULT_ERROR | VM_FAULT_RETRY))
> + if (ret & VM_FAULT_RETRY)
> + return;
> +
> + /* Register both successful and failed faults in PGFAULT counters. */
> + count_vm_event(PGFAULT);
> + count_memcg_event_mm(mm, PGFAULT);
Is there a reason why the vm event accounting needs to be explicitly
different from the perf event accounting right below it when handling
ERROR?

I get the point if this is to make sure the ERROR accounting for these
two vm events stays untouched after this patch. IOW, probably the only
concern right now is having RETRY counted much more than before (perhaps
worse with vma locking applied).

But since we're on this anyway, I'm wondering whether we should also
align the two events (vm, perf) so they account for the same set of
faults. Any future reader will be confused about why they account
differently, IMHO, so if we do need to differentiate we'd better add a
comment explaining why.

I'm wildly guessing the error faults are indeed very rare and probably
don't matter much at all. I just think the code can be slightly cleaner
if the vm/perf accounting matches, and simpler if we treat everything
the same. E.g., we could also drop the "goto out"s below, as in the
rough sketch that follows. What do you think?
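
Untested and purely illustrative; it assumes the existing major/minor
and perf accounting at the tail of the function stays exactly as it is
today, so that part is elided:

static inline void mm_account_fault(struct mm_struct *mm, struct pt_regs *regs,
                                    unsigned long address, unsigned int flags,
                                    vm_fault_t ret)
{
        bool major;

        /*
         * Skip incomplete (RETRY) faults, which will be counted once they
         * complete, and unsuccessful (ERROR) faults, which are not counted
         * at all.  The vm events and the perf events then cover exactly
         * the same set of faults.
         */
        if (ret & (VM_FAULT_ERROR | VM_FAULT_RETRY))
                return;

        count_vm_event(PGFAULT);
        count_memcg_event_mm(mm, PGFAULT);

        major = (ret & VM_FAULT_MAJOR) || (flags & FAULT_FLAG_TRIED);

        /* ... existing major/minor and perf accounting unchanged ... */
}

With that, the early error paths in handle_mm_fault() could keep
returning directly and the "goto out"s would not be needed.
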
Thanks,
> +
> + /*
> + * Do not account for unsuccessful faults (e.g. when the address wasn't
> + * valid). That includes arch_vma_access_permitted() failing before
> + * reaching here. So this is not a "this many hardware page faults"
> + * counter. We should use the hw profiling for that.
> + */
> + if (ret & VM_FAULT_ERROR)
> return;
>
> /*
> @@ -5180,21 +5186,22 @@ static vm_fault_t sanitize_fault_flags(struct vm_area_struct *vma,
> vm_fault_t handle_mm_fault(struct vm_area_struct *vma, unsigned long address,
> unsigned int flags, struct pt_regs *regs)
> {
> + /* Copy vma->vm_mm in case mmap_lock is dropped and vma becomes unstable. */
> + struct mm_struct *mm = vma->vm_mm;
> vm_fault_t ret;
>
> __set_current_state(TASK_RUNNING);
>
> - count_vm_event(PGFAULT);
> - count_memcg_event_mm(vma->vm_mm, PGFAULT);
> -
> ret = sanitize_fault_flags(vma, &flags);
> if (ret)
> - return ret;
> + goto out;
>
> if (!arch_vma_access_permitted(vma, flags & FAULT_FLAG_WRITE,
> flags & FAULT_FLAG_INSTRUCTION,
> - flags & FAULT_FLAG_REMOTE))
> - return VM_FAULT_SIGSEGV;
> + flags & FAULT_FLAG_REMOTE)) {
> + ret = VM_FAULT_SIGSEGV;
> + goto out;
> + }
>
> /*
> * Enable the memcg OOM handling for faults triggered in user
> @@ -5223,8 +5230,8 @@ vm_fault_t handle_mm_fault(struct vm_area_struct *vma, unsigned long address,
> if (task_in_memcg_oom(current) && !(ret & VM_FAULT_OOM))
> mem_cgroup_oom_synchronize(false);
> }
> -
> - mm_account_fault(regs, address, flags, ret);
> +out:
> + mm_account_fault(mm, regs, address, flags, ret);
>
> return ret;
> }
> --
> 2.40.0.634.g4ca3ef3211-goog
>
>
--
Peter Xu