Message-ID: <20210113142022.rbxbb77saxioednq@revolver>
Date: Wed, 13 Jan 2021 09:20:33 -0500
From: "Liam R. Howlett" <Liam.Howlett@...cle.com>
To: Randy Dunlap <rdunlap@...radead.org>
Cc: maple-tree@...ts.infradead.org, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, Andrew Morton <akpm@...gle.com>,
Song Liu <songliubraving@...com>,
Davidlohr Bueso <dave@...olabs.net>,
"Paul E . McKenney" <paulmck@...nel.org>,
Rik van Riel <riel@...riel.com>,
Peter Zijlstra <peterz@...radead.org>,
Matthew Wilcox <willy@...radead.org>,
Jerome Glisse <jglisse@...hat.com>,
David Rientjes <rientjes@...gle.com>,
Axel Rasmussen <axelrasmussen@...gle.com>,
Suren Baghdasaryan <surenb@...gle.com>,
Vlastimil Babka <vbabka@...e.cz>
Subject: Re: [PATCH v2 14/70] mm/mmap: Change do_brk_flags() to expand
existing VMA and add do_brk_munmap()
* Randy Dunlap <rdunlap@...radead.org> [210112 16:23]:
> Hi--
>
> On 1/12/21 8:11 AM, Liam R. Howlett wrote:
> > Avoid allocating a new VMA when it is not necessary. Expand or contract
> > the existing VMA instead. This avoids unnecessary tree manipulations
> > and allocations.
> >
> > Once the VMA is known, use it directly when populating to avoid
> > unnecessary lookup work.
> >
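(For anyone skimming the thread: the point is that a brk() grow becomes a
single in-place update of the existing VMA plus one maple tree store instead
of a fresh vm_area_struct allocation.  A rough sketch of that fast path,
illustrative only - the checks are simplified and the maple tree calls use the
current API names, this is not the code in the patch:)

	static int grow_brk_vma_sketch(struct ma_state *mas,
				       struct vm_area_struct *vma,
				       unsigned long addr, unsigned long len)
	{
		if (vma && vma->vm_end == addr &&
		    !vma->vm_file && !(vma->vm_flags & VM_SPECIAL)) {
			/* Reuse the existing anonymous VMA: extend it in place... */
			vma->vm_end = addr + len;
			/* ...and write the new range back into the maple tree. */
			mas_set_range(mas, vma->vm_start, vma->vm_end - 1);
			return mas_store_gfp(mas, vma, GFP_KERNEL);
		}
		/* Otherwise the caller falls back to creating a new anonymous VMA. */
		return -ENOMEM;
	}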
> > Signed-off-by: Liam R. Howlett <Liam.Howlett@...cle.com>
> > ---
> > mm/mmap.c | 299 +++++++++++++++++++++++++++++++++++++++++++-----------
> > 1 file changed, 237 insertions(+), 62 deletions(-)
> >
> > diff --git a/mm/mmap.c b/mm/mmap.c
> > index a2b32202191d6..f500d5e490f1c 100644
> > --- a/mm/mmap.c
> > +++ b/mm/mmap.c
>
>
>
> > @@ -2022,8 +2068,7 @@ get_unmapped_area(struct file *file, unsigned long addr, unsigned long len,
> > EXPORT_SYMBOL(get_unmapped_area);
> >
> > /**
> > - * find_vma() - Find the VMA for a given address, or the next vma. May return
> > - * NULL in the case of no vma at addr or above
> > + * find_vma() - Find the VMA for a given address, or the next vma.
> > * @mm The mm_struct to check
>
> * @mm: ...
Ack
>
> > * @addr: The address
> > *
> > @@ -2777,16 +2825,102 @@ SYSCALL_DEFINE5(remap_file_pages, unsigned long, start, unsigned long, size,
> > }
> >
> > /*
> > - * this is really a simplified "do_mmap". it only handles
> > - * anonymous maps. eventually we may be able to do some
> > - * brk-specific accounting here.
> > + * do_brk_munmap() - Unmap a partial vma.
> > + * @mas: The maple tree state.
> > + * @vma: The vma to be modified
> > + * @newbrk: The start of the address to unmap
> > + * @oldbrk: The end of the address to unmap
>
> missing:
> * @uf: ...
Thanks, yes. I will add the @uf userfaultfd parameter.
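Probably something along these lines, with the exact wording to be settled in
the next revision:

	 * @uf: The userfaultfd list_head

so the kernel-doc matches the new parameter list.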
>
> > + *
> > + * Returns: 0 on success.
> > + * unmaps a partial VMA mapping. Does not handle alignment, downgrades lock if
> > + * possible.
> > */
> > -static int do_brk_flags(unsigned long addr, unsigned long len,
> > - unsigned long flags, struct list_head *uf)
> > +static int do_brk_munmap(struct ma_state *mas, struct vm_area_struct *vma,
> > + unsigned long newbrk, unsigned long oldbrk,
> > + struct list_head *uf)
> > +{
> > + struct mm_struct *mm = vma->vm_mm;
> > + struct vm_area_struct unmap;
> > + unsigned long unmap_pages;
> > + int ret = 1;
> > +
> > + arch_unmap(mm, newbrk, oldbrk);
> > +
> > + if (likely(vma->vm_start >= newbrk)) { // remove entire mapping(s)
> > + mas_set(mas, newbrk);
> > + if (vma->vm_start != newbrk)
> > + mas_reset(mas); // cause a re-walk for the first overlap.
> > + ret = do_mas_munmap(mas, mm, newbrk, oldbrk-newbrk, uf, true);
> > + goto munmap_full_vma;
> > + }
> > +
> > + vma_init(&unmap, mm);
> > + unmap.vm_start = newbrk;
> > + unmap.vm_end = oldbrk;
> > + ret = userfaultfd_unmap_prep(&unmap, newbrk, oldbrk, uf);
> > + if (ret)
> > + return ret;
> > + ret = 1;
> > +
> > + // Change the oldbrk of vma to the newbrk of the munmap area
> > + vma_adjust_trans_huge(vma, vma->vm_start, newbrk, 0);
> > + if (vma->anon_vma) {
> > + anon_vma_lock_write(vma->anon_vma);
> > + anon_vma_interval_tree_pre_update_vma(vma);
> > + }
> > +
> > + vma->vm_end = newbrk;
> > + if (vma_mas_remove(&unmap, mas))
> > + goto mas_store_fail;
> > +
> > + vmacache_invalidate(vma->vm_mm);
> > + if (vma->anon_vma) {
> > + anon_vma_interval_tree_post_update_vma(vma);
> > + anon_vma_unlock_write(vma->anon_vma);
> > + }
> > +
> > + unmap_pages = vma_pages(&unmap);
> > + if (unmap.vm_flags & VM_LOCKED) {
> > + mm->locked_vm -= unmap_pages;
> > + munlock_vma_pages_range(&unmap, newbrk, oldbrk);
> > + }
> > +
> > + mmap_write_downgrade(mm);
> > + unmap_region(mm, &unmap, vma, newbrk, oldbrk);
> > + /* Statistics */
> > + vm_stat_account(mm, unmap.vm_flags, -unmap_pages);
> > + if (unmap.vm_flags & VM_ACCOUNT)
> > + vm_unacct_memory(unmap_pages);
> > +
> > +munmap_full_vma:
> > + validate_mm_mt(mm);
> > + return ret;
> > +
> > +mas_store_fail:
> > + vma->vm_end = oldbrk;
> > + anon_vma_interval_tree_post_update_vma(vma);
> > + anon_vma_unlock_write(vma->anon_vma);
> > + return -ENOMEM;
> > +}
> > +
> > +/*
> > + * do_brk_flags() - Increase the brk vma if the flags match.
> > + * @mas: The maple tree state.
> > + * @addr: The start address
> > + * @len: The length of the increase
> > + * @vma: The vma,
>
> s/@vma/@brkvma/ ??
yes, sorry. I tried to clarify the name in a later revision.
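The tag will simply follow the parameter name, roughly:

	 * @brkvma: The brk vma to expand, if possible

(wording approximate).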
>
> > + * @flags: The VMA Flags
> > + *
> > + * Extend the brk VMA from addr to addr + len. If the VMA is NULL or the flags
> > + * do not match then create a new anonymous VMA. Eventually we may be able to
> > + * do some brk-specific accounting here.
> > + */
> > +static int do_brk_flags(struct ma_state *mas, struct vm_area_struct **brkvma,
> > + unsigned long addr, unsigned long len,
> > + unsigned long flags)
> > {
> > struct mm_struct *mm = current->mm;
> > - struct vm_area_struct *vma, *prev;
> > - pgoff_t pgoff = addr >> PAGE_SHIFT;
> > + struct vm_area_struct *prev = NULL, *vma;
> > int error;
> > unsigned long mapped_addr;
> > validate_mm_mt(mm);
>
>
>
>
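For context on how this is meant to be driven: the brk() side walks the tree
once, hands both the maple state and the vma it found down, and then reuses
the returned vma for the populate step instead of doing another lookup.  Very
roughly (a simplified sketch, not the actual diff):

	/* Simplified sketch of the caller side (not the actual diff). */
	static unsigned long brk_grow_sketch(struct mm_struct *mm,
					     unsigned long oldbrk,
					     unsigned long newbrk)
	{
		struct vm_area_struct *brkvma;
		MA_STATE(mas, &mm->mm_mt, oldbrk - 1, oldbrk - 1);

		brkvma = mas_walk(&mas);	/* vma ending at the old break, if any */
		if (do_brk_flags(&mas, &brkvma, oldbrk, newbrk - oldbrk, 0) < 0)
			return oldbrk;		/* failed: break unchanged */

		/*
		 * brkvma is now the expanded (or newly created) mapping and can
		 * be handed to the populate path without another tree walk.
		 */
		return newbrk;
	}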
Thank you for looking at this. I will fix all the issues you have
pointed out across the three emails.
Thanks,
Liam