Message-ID: <CAJuCfpH+O0NYtTrGKSY6FjBOcWpyKXB+_4rsSRjcewSXUWVfCQ@mail.gmail.com>
Date: Fri, 26 Apr 2024 08:07:45 -0700
From: Suren Baghdasaryan <surenb@...gle.com>
To: Matthew Wilcox <willy@...radead.org>
Cc: Peter Xu <peterx@...hat.com>, "Liam R. Howlett" <Liam.Howlett@...cle.com>,
linux-kernel@...r.kernel.org, linux-mm@...ck.org,
Andrew Morton <akpm@...ux-foundation.org>, Lokesh Gidra <lokeshgidra@...gle.com>,
Alistair Popple <apopple@...dia.com>
Subject: Re: [PATCH] mm: Always sanity check anon_vma first for per-vma locks
On Fri, Apr 26, 2024 at 7:00 AM Matthew Wilcox <willy@...radead.org> wrote:
>
> On Fri, Apr 12, 2024 at 04:14:16AM +0100, Matthew Wilcox wrote:
> > Suren, what would you think to this?
> >
> > diff --git a/mm/memory.c b/mm/memory.c
> > index 6e2fe960473d..e495adcbe968 100644
> > --- a/mm/memory.c
> > +++ b/mm/memory.c
> > @@ -5821,15 +5821,6 @@ struct vm_area_struct *lock_vma_under_rcu(struct mm_struct *mm,
> > if (!vma_start_read(vma))
> > goto inval;
> >
> > - /*
> > - * find_mergeable_anon_vma uses adjacent vmas which are not locked.
> > - * This check must happen after vma_start_read(); otherwise, a
> > - * concurrent mremap() with MREMAP_DONTUNMAP could dissociate the VMA
> > - * from its anon_vma.
> > - */
> > - if (unlikely(vma_is_anonymous(vma) && !vma->anon_vma))
> > - goto inval_end_read;
> > -
> > /* Check since vm_start/vm_end might change before we lock the VMA */
> > if (unlikely(address < vma->vm_start || address >= vma->vm_end))
> > goto inval_end_read;
> >
> > That takes a few insns out of the page fault path (good!) at the cost
> > of one extra trip around the fault handler for the first fault on an
> > anon vma. It makes the file & anon paths more similar to each other
> > (good!)
> >
> > We'd need some data to be sure it's really a win, but less code is
> > always good.
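For context, the fallback being described works roughly like this: with
the check in lock_vma_under_rcu() gone, the first fault on an anon VMA
reaches the anon_vma preparation step under the per-VMA lock, fails
there, and retries under mmap_lock. A sketch modeled on
vmf_anon_prepare() in mm/memory.c of this era (simplified, not
verbatim):

vm_fault_t vmf_anon_prepare(struct vm_fault *vmf)
{
	struct vm_area_struct *vma = vmf->vma;

	if (likely(vma->anon_vma))
		return 0;
	if (vmf->flags & FAULT_FLAG_VMA_LOCK) {
		/*
		 * find_mergeable_anon_vma() looks at neighbouring VMAs,
		 * which we cannot do safely under the per-VMA lock, so
		 * drop the lock and let the caller retry the fault with
		 * mmap_lock held.
		 */
		vma_end_read(vma);
		return VM_FAULT_RETRY;
	}
	if (__anon_vma_prepare(vma))
		return VM_FAULT_OOM;
	return 0;
}

This is the "one extra trip around the fault handler" above: it only
happens on the first fault, before vma->anon_vma is populated.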
>
> Intel's 0day got back to me with data and it's ridiculously good.
> Headline figure: over 3x throughput improvement with vm-scalability
> https://lore.kernel.org/all/202404261055.c5e24608-oliver.sang@intel.com/
>
> I can't see why it's that good. It shouldn't be that good. I'm
> seeing big numbers here:
>
> 4366 ± 2% +565.6% 29061 perf-stat.overall.cycles-between-cache-misses
>
> and the code being deleted is only checking vma->vm_ops and
> vma->anon_vma. Surely that cache line is referenced so frequently
> during pagefault that deleting a reference here will make no difference
> at all?
That indeed looks overly good. Sorry, I haven't had a chance to run
the benchmarks on my side yet because of the ongoing Android bootcamp
this week.
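For reference, the deleted check reads just two fields of the VMA;
vma_is_anonymous() (include/linux/mm.h) is nothing more than a vm_ops
test:

static inline bool vma_is_anonymous(struct vm_area_struct *vma)
{
	return !vma->vm_ops;
}

Both loads hit struct vm_area_struct, which the fault path dereferences
heavily anyway, so the cache-miss numbers are indeed hard to explain
from the deleted loads alone.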
>
> We've clearly got an inlining change. viz:
>
> 72.57 -72.6 0.00 perf-profile.calltrace.cycles-pp.exc_page_fault.asm_exc_page_fault.do_access
> 73.28 -72.6 0.70 perf-profile.calltrace.cycles-pp.asm_exc_page_fault.do_access
> 72.55 -72.5 0.00 perf-profile.calltrace.cycles-pp.do_user_addr_fault.exc_page_fault.asm_exc_page_fault.do_access
> 69.93 -69.9 0.00 perf-profile.calltrace.cycles-pp.lock_mm_and_find_vma.do_user_addr_fault.exc_page_fault.asm_exc_page_fault.do_access
> 69.12 -69.1 0.00 perf-profile.calltrace.cycles-pp.down_read_killable.lock_mm_and_find_vma.do_user_addr_fault.exc_page_fault.asm_exc_page_fault
> 68.78 -68.8 0.00 perf-profile.calltrace.cycles-pp.rwsem_down_read_slowpath.down_read_killable.lock_mm_and_find_vma.do_user_addr_fault.exc_page_fault
> 65.78 -65.8 0.00 perf-profile.calltrace.cycles-pp._raw_spin_lock_irq.rwsem_down_read_slowpath.down_read_killable.lock_mm_and_find_vma.do_user_addr_fault
> 65.43 -65.4 0.00 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irq.rwsem_down_read_slowpath.down_read_killable.lock_mm_and_find_vma
>
> 11.22 +86.5 97.68 perf-profile.calltrace.cycles-pp.down_write_killable.vm_mmap_pgoff.ksys_mmap_pgoff.do_syscall_64.entry_SYSCALL_64_after_hwframe
> 11.14 +86.5 97.66 perf-profile.calltrace.cycles-pp.rwsem_down_write_slowpath.down_write_killable.vm_mmap_pgoff.ksys_mmap_pgoff.do_syscall_64
> 3.17 ± 2% +94.0 97.12 perf-profile.calltrace.cycles-pp.osq_lock.rwsem_optimistic_spin.rwsem_down_write_slowpath.down_write_killable.vm_mmap_pgoff
> 3.45 ± 2% +94.1 97.59 perf-profile.calltrace.cycles-pp.rwsem_optimistic_spin.rwsem_down_write_slowpath.down_write_killable.vm_mmap_pgoff.ksys_mmap_pgoff
> 0.00 +98.2 98.15 perf-profile.calltrace.cycles-pp.vm_mmap_pgoff.ksys_mmap_pgoff.do_syscall_64.entry_SYSCALL_64_after_hwframe
> 0.00 +98.2 98.16 perf-profile.calltrace.cycles-pp.ksys_mmap_pgoff.do_syscall_64.entry_SYSCALL_64_after_hwframe
>
> so maybe the compiler has been able to eliminate some loads from
> contended cachelines?
>
> 703147 -87.6% 87147 ± 2% perf-stat.ps.context-switches
> 663.67 ± 5% +7551.9% 50783 vm-scalability.time.involuntary_context_switches
> 1.105e+08 -86.7% 14697764 ą 2% vm-scalability.time.voluntary_context_switches
>
> indicates to me that we're taking the mmap rwsem far less often (those
> would be accounted as voluntary context switches).
>
> So maybe the cache miss reduction is a consequence of just running for
> longer before being preempted.
>
> I still don't understand why we have to take the mmap_sem less often.
> Is there perhaps a VMA for which we have a NULL vm_ops, but don't set
> an anon_vma on a page fault?
I think the only path in either do_anonymous_page() or
do_huge_pmd_anonymous_page() that skips calling anon_vma_prepare() is
the "Use the zero-page for reads" branch here:
https://elixir.bootlin.com/linux/latest/source/mm/memory.c#L4265. I
haven't looked into this particular benchmark yet, but will try out
your change once I have some time to benchmark it.
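That branch looks roughly like this (condensed from do_anonymous_page()
in mm/memory.c; error handling and PTE bookkeeping elided):

	/* Use the zero-page for reads */
	if (!(vmf->flags & FAULT_FLAG_WRITE) &&
	    !mm_forbids_zeropage(vma->vm_mm)) {
		entry = pte_mkspecial(pfn_pte(my_zero_pfn(vmf->address),
					      vma->vm_page_prot));
		vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd,
					       vmf->address, &vmf->ptl);
		/* ... install the PTE and return; anon_vma_prepare()
		 * is never called, so vma->anon_vma stays NULL ... */
		goto setpte;
	}

So a workload that only ever reads an anonymous mapping can fault
repeatedly without ever attaching an anon_vma.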
>