Message-ID: <20231003185149.brbzyu2ivn25tkeu@revolver>
Date:   Tue, 3 Oct 2023 14:51:49 -0400
From:   "Liam R. Howlett" <Liam.Howlett@...cle.com>
To:     Suren Baghdasaryan <surenb@...gle.com>
Cc:     Andrew Morton <akpm@...ux-foundation.org>, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org, Jann Horn <jannh@...gle.com>,
        Lorenzo Stoakes <lstoakes@...il.com>,
        Vlastimil Babka <vbabka@...e.cz>,
        Matthew Wilcox <willy@...radead.org>, stable@...r.kernel.org
Subject: Re: [PATCH v3 2/3] mmap: Fix error paths with dup_anon_vma()

* Suren Baghdasaryan <surenb@...gle.com> [231003 12:21]:
> On Fri, Sep 29, 2023 at 11:30 AM Liam R. Howlett
> <Liam.Howlett@...cle.com> wrote:
> >
> > When the calling function fails after the dup_anon_vma(), the
> > duplication of the anon_vma is not being undone.  Add the necessary
> > unlink_anon_vmas() call to the error paths that are missing them.
> >
> > This issue showed up during inspection of the error path in vma_merge()
> > for an unrelated vma iterator issue.
> >
> > Users may experience increased memory usage, which may be problematic as
> > the failure would likely be caused by a low memory situation.
> >
> > Fixes: d4af56c5c7c6 ("mm: start tracking VMAs with maple tree")
> > Cc: stable@...r.kernel.org
> > Cc: Jann Horn <jannh@...gle.com>
> > Signed-off-by: Liam R. Howlett <Liam.Howlett@...cle.com>
> > ---
> >  mm/mmap.c | 30 ++++++++++++++++++++++--------
> >  1 file changed, 22 insertions(+), 8 deletions(-)
> >
> > diff --git a/mm/mmap.c b/mm/mmap.c
> > index acb7dea49e23..f9f0a5fe4db4 100644
> > --- a/mm/mmap.c
> > +++ b/mm/mmap.c
> > @@ -583,11 +583,12 @@ static inline void vma_complete(struct vma_prepare *vp,
> >   * dup_anon_vma() - Helper function to duplicate anon_vma
> >   * @dst: The destination VMA
> >   * @src: The source VMA
> > + * @dup: Pointer to the destination VMA when successful.
> >   *
> >   * Returns: 0 on success.
> >   */
> >  static inline int dup_anon_vma(struct vm_area_struct *dst,
> > -                              struct vm_area_struct *src)
> > +               struct vm_area_struct *src, struct vm_area_struct **dup)
> >  {
> >         /*
> >          * Easily overlooked: when mprotect shifts the boundary, make sure the
> > @@ -595,9 +596,15 @@ static inline int dup_anon_vma(struct vm_area_struct *dst,
> >          * anon pages imported.
> >          */
> >         if (src->anon_vma && !dst->anon_vma) {
> > +               int ret;
> > +
> >                 vma_assert_write_locked(dst);
> >                 dst->anon_vma = src->anon_vma;
> > -               return anon_vma_clone(dst, src);
> > +               ret = anon_vma_clone(dst, src);
> > +               if (ret)
> > +                       return ret;
> > +
> > +               *dup = dst;
> >         }
> >
> >         return 0;
> > @@ -624,6 +631,7 @@ int vma_expand(struct vma_iterator *vmi, struct vm_area_struct *vma,
> >                unsigned long start, unsigned long end, pgoff_t pgoff,
> >                struct vm_area_struct *next)
> >  {
> > +       struct vm_area_struct *anon_dup = NULL;
> >         bool remove_next = false;
> >         struct vma_prepare vp;
> >
> > @@ -633,7 +641,7 @@ int vma_expand(struct vma_iterator *vmi, struct vm_area_struct *vma,
> >
> >                 remove_next = true;
> >                 vma_start_write(next);
> > -               ret = dup_anon_vma(vma, next);
> > +               ret = dup_anon_vma(vma, next, &anon_dup);
> >                 if (ret)
> >                         return ret;
> 
> Shouldn't the above be changed to a "goto nomem" instead of "return ret" ?
> 
> 
> >         }
> > @@ -661,6 +669,8 @@ int vma_expand(struct vma_iterator *vmi, struct vm_area_struct *vma,
> >         return 0;
> >
> >  nomem:
> > +       if (anon_dup)
> > +               unlink_anon_vmas(anon_dup);
> >         return -ENOMEM;
> >  }
> >
> > @@ -860,6 +870,7 @@ struct vm_area_struct *vma_merge(struct vma_iterator *vmi, struct mm_struct *mm,
> >  {
> >         struct vm_area_struct *curr, *next, *res;
> >         struct vm_area_struct *vma, *adjust, *remove, *remove2;
> > +       struct vm_area_struct *anon_dup = NULL;
> >         struct vma_prepare vp;
> >         pgoff_t vma_pgoff;
> >         int err = 0;
> > @@ -927,18 +938,18 @@ struct vm_area_struct *vma_merge(struct vma_iterator *vmi, struct mm_struct *mm,
> >                 vma_start_write(next);
> >                 remove = next;                          /* case 1 */
> >                 vma_end = next->vm_end;
> > -               err = dup_anon_vma(prev, next);
> > +               err = dup_anon_vma(prev, next, &anon_dup);
> >                 if (curr) {                             /* case 6 */
> >                         vma_start_write(curr);
> >                         remove = curr;
> >                         remove2 = next;
> >                         if (!next->anon_vma)
> > -                               err = dup_anon_vma(prev, curr);
> > +                               err = dup_anon_vma(prev, curr, &anon_dup);
> >                 }
> >         } else if (merge_prev) {                        /* case 2 */
> >                 if (curr) {
> >                         vma_start_write(curr);
> > -                       err = dup_anon_vma(prev, curr);
> > +                       err = dup_anon_vma(prev, curr, &anon_dup);
> >                         if (end == curr->vm_end) {      /* case 7 */
> >                                 remove = curr;
> >                         } else {                        /* case 5 */
> > @@ -954,7 +965,7 @@ struct vm_area_struct *vma_merge(struct vma_iterator *vmi, struct mm_struct *mm,
> >                         vma_end = addr;
> >                         adjust = next;
> >                         adj_start = -(prev->vm_end - addr);
> > -                       err = dup_anon_vma(next, prev);
> > +                       err = dup_anon_vma(next, prev, &anon_dup);
> >                 } else {
> >                         /*
> >                          * Note that cases 3 and 8 are the ONLY ones where prev
> > @@ -968,7 +979,7 @@ struct vm_area_struct *vma_merge(struct vma_iterator *vmi, struct mm_struct *mm,
> >                                 vma_pgoff = curr->vm_pgoff;
> >                                 vma_start_write(curr);
> >                                 remove = curr;
> > -                               err = dup_anon_vma(next, curr);
> > +                               err = dup_anon_vma(next, curr, &anon_dup);
> >                         }
> >                 }
> >         }
> > @@ -1018,6 +1029,9 @@ struct vm_area_struct *vma_merge(struct vma_iterator *vmi, struct mm_struct *mm,
> >         return res;
> >
> >  prealloc_fail:
> > +       if (anon_dup)
> > +               unlink_anon_vmas(anon_dup);
> 
> Maybe a stupid question, but why can't we do this unlinking inside
> dup_anon_vma() itself when anon_vma_clone() fails? That would
> eliminate the need for the out parameter in that function. I suspect
> that there is a reason for that which I'm missing.

It's too late.  This is to undo the link when the preallocation for the
maple tree fails.  So we had memory to dup the anon vma, but not to put
it in the tree.
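
[Editorial note: for readers following the ordering Liam describes, here is a minimal,
self-contained C sketch. It is NOT the mm/mmap.c code; the names try_dup_anon(),
try_prealloc(), undo_dup() and expand() are made-up stand-ins for dup_anon_vma(),
the maple-tree preallocation, unlink_anon_vmas() and vma_expand()/vma_merge()
respectively. It only illustrates why the cleanup has to live in the caller: the
duplication succeeds first, the later preallocation is what fails.]

#include <stdbool.h>
#include <stdio.h>
#include <errno.h>

/* Fake knob so the example reproduces the failing prealloc path. */
static bool prealloc_should_fail = true;

static int try_dup_anon(bool *dup_done)
{
	/* The duplication itself succeeds here, so it cannot undo anything. */
	*dup_done = true;
	return 0;
}

static int try_prealloc(void)
{
	/* Models the later maple-tree preallocation running out of memory. */
	return prealloc_should_fail ? -ENOMEM : 0;
}

static void undo_dup(void)
{
	/* Models unlink_anon_vmas(): only the caller knows dup succeeded. */
	printf("undoing earlier duplication\n");
}

/* Models the caller (vma_expand()/vma_merge()): dup first, prealloc second. */
static int expand(void)
{
	bool dup_done = false;
	int ret;

	ret = try_dup_anon(&dup_done);
	if (ret)
		return ret;		/* dup itself failed: nothing to undo */

	ret = try_prealloc();
	if (ret) {
		if (dup_done)		/* dup succeeded, prealloc did not */
			undo_dup();
		return ret;
	}
	return 0;
}

int main(void)
{
	printf("expand() = %d\n", expand());
	return 0;
}

[By the time try_prealloc() fails, try_dup_anon() has already returned 0, so the
unwind can only happen in the caller's error path; that is what the anon_dup out
parameter in the patch is for.]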

> 
> > +
> >  anon_vma_fail:
> >         vma_iter_set(vmi, addr);
> >         vma_iter_load(vmi);
> > --
> > 2.40.1
> >
