Message-ID: <ZhkrY5tkxgAsL1GF@x1n>
Date: Fri, 12 Apr 2024 08:38:59 -0400
From: Peter Xu <peterx@...hat.com>
To: Matthew Wilcox <willy@...radead.org>
Cc: "Liam R. Howlett" <Liam.Howlett@...cle.com>,
linux-kernel@...r.kernel.org, linux-mm@...ck.org,
Andrew Morton <akpm@...ux-foundation.org>,
Suren Baghdasaryan <surenb@...gle.com>,
Lokesh Gidra <lokeshgidra@...gle.com>,
Alistair Popple <apopple@...dia.com>
Subject: Re: [PATCH] mm: Always sanity check anon_vma first for per-vma locks
On Fri, Apr 12, 2024 at 04:14:16AM +0100, Matthew Wilcox wrote:
> On Thu, Apr 11, 2024 at 11:02:32PM +0100, Matthew Wilcox wrote:
> > > How many instructions it takes for a late RETRY for WRITEs to private file
> > > mappings, fallback to mmap_sem?
> >
> > Doesn't matter. That happens _once_ per VMA, and it's dwarfed by the
> > cost of allocating and initialising the COWed page. You're adding
> > instructions to every single page fault. I'm not happy that we had to
> > add extra instructions to the fault path for single-threaded programs,
> > but we at least had the justification that we were improving scalability
> > on large systems. Your excuse is "it makes the code cleaner". And
> > honestly, I don't think it even does that.
>
> Suren, what would you think to this?
>
> diff --git a/mm/memory.c b/mm/memory.c
> index 6e2fe960473d..e495adcbe968 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -5821,15 +5821,6 @@ struct vm_area_struct *lock_vma_under_rcu(struct mm_struct *mm,
> if (!vma_start_read(vma))
> goto inval;
>
> - /*
> - * find_mergeable_anon_vma uses adjacent vmas which are not locked.
> - * This check must happen after vma_start_read(); otherwise, a
> - * concurrent mremap() with MREMAP_DONTUNMAP could dissociate the VMA
> - * from its anon_vma.
> - */
> - if (unlikely(vma_is_anonymous(vma) && !vma->anon_vma))
> - goto inval_end_read;
> -
> /* Check since vm_start/vm_end might change before we lock the VMA */
> if (unlikely(address < vma->vm_start || address >= vma->vm_end))
> goto inval_end_read;
>
> That takes a few insns out of the page fault path (good!) at the cost
> of one extra trip around the fault handler for the first fault on an
> anon vma. It makes the file & anon paths more similar to each other
> (good!)
>
> We'd need some data to be sure it's really a win, but less code is
> always good.
You at least need two things:

(1) don't throw away Jann's comment so easily;

(2) check whether the anonymous-memory path even has that fallback yet, at all.

Maybe someone could already comment on this one in a harsh way, but no, I'm
not going to be like that.

I still don't understand why you object so strongly to avoiding the
fallback entirely when we can; the flags I checked should all be in hot
cache anyway, I think.

And since I've had enough of the tone of your previous replies, I'll leave
the remaining comments for others.
--
Peter Xu