Message-ID: <1c8488e1-1776-f21e-bafd-3892f0894392@suse.cz>
Date: Fri, 29 Sep 2023 11:52:18 +0200
From: Vlastimil Babka <vbabka@...e.cz>
To: "Liam R. Howlett" <Liam.Howlett@...cle.com>,
Andrew Morton <akpm@...ux-foundation.org>
Cc: maple-tree@...ts.infradead.org, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, Jann Horn <jannh@...gle.com>,
Lorenzo Stoakes <lstoakes@...il.com>,
Suren Baghdasaryan <surenb@...gle.com>,
Matthew Wilcox <willy@...radead.org>, stable@...r.kernel.org
Subject: Re: [PATCH 1/3] mmap: Fix vma_iterator in error path of vma_merge()
On 9/27/23 18:07, Liam R. Howlett wrote:
> When merging of the previous VMA fails after the vma iterator has been
> moved to the previous entry, the vma iterator must be advanced to ensure
> the caller takes the correct action on the next vma iterator event. Fix
> this by adding a vma_next() call to the error path.
>
> Users may experience higher CPU usage, most likely in very low memory
> situations.
Maybe we could say explicitly that before this fix, vma_merge() will be called
twice on the same vma. To the best of our knowledge that causes nothing worse
than some wasted cycles, because vma == prev, but it's fragile?
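
To make that concrete, here's a toy userspace model of the iterator hazard
(purely illustrative, not kernel code; toy_merge_prev() and toy_next() are
made-up stand-ins for a vma_merge() of the previous VMA that fails, and for
the caller's subsequent vma_next()):

#include <stdbool.h>
#include <stdio.h>

struct toy_iter { int pos; };

static const char * const vmas[] = { "prev", "vma", "next" };

static const char *toy_next(struct toy_iter *it)
{
	return vmas[++it->pos];
}

static void toy_merge_prev(struct toy_iter *it, bool fixed)
{
	it->pos--;		/* iterator walked back to "prev" */
	/* ... dup_anon_vma()/prealloc failure happens here ... */
	if (fixed)
		it->pos++;	/* the fix: vma_next() on the error path */
}

int main(void)
{
	struct toy_iter it = { .pos = 1 };	/* caller is handling "vma" */

	toy_merge_prev(&it, false);
	printf("buggy: next entry is %s\n", toy_next(&it));	/* "vma" again */

	it.pos = 1;
	toy_merge_prev(&it, true);
	printf("fixed: next entry is %s\n", toy_next(&it));	/* "next" */
	return 0;
}

With the stale position, the caller's next vma_next() hands back the VMA it
was already working on, which is exactly the "called twice on the same vma"
case above.
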
> Link: https://lore.kernel.org/linux-mm/CAG48ez12VN1JAOtTNMY+Y2YnsU45yL5giS-Qn=ejtiHpgJAbdQ@mail.gmail.com/
> Closes: https://lore.kernel.org/linux-mm/CAG48ez12VN1JAOtTNMY+Y2YnsU45yL5giS-Qn=ejtiHpgJAbdQ@mail.gmail.com/
> Fixes: 18b098af2890 ("vma_merge: set vma iterator to correct position.")
> Cc: stable@...r.kernel.org
> Cc: Jann Horn <jannh@...gle.com>
> Signed-off-by: Liam R. Howlett <Liam.Howlett@...cle.com>
> ---
> mm/mmap.c | 12 +++++++++---
> 1 file changed, 9 insertions(+), 3 deletions(-)
>
> diff --git a/mm/mmap.c b/mm/mmap.c
> index b56a7f0c9f85..b5bc4ca9bdc4 100644
> --- a/mm/mmap.c
> +++ b/mm/mmap.c
> @@ -968,14 +968,14 @@ struct vm_area_struct *vma_merge(struct vma_iterator *vmi, struct mm_struct *mm,
> vma_pgoff = curr->vm_pgoff;
> vma_start_write(curr);
> remove = curr;
> - err = dup_anon_vma(next, curr);
> + err = dup_anon_vma(next, curr, &anon_dup);
> }
> }
> }
>
> /* Error in anon_vma clone. */
> if (err)
> - return NULL;
> + goto anon_vma_fail;
>
> if (vma_start < vma->vm_start || vma_end > vma->vm_end)
> vma_expanded = true;
Are the vma_iter_config() actions done in this part something we don't need
to undo on the prealloc_fail path?
> @@ -988,7 +988,7 @@ struct vm_area_struct *vma_merge(struct vma_iterator *vmi, struct mm_struct *mm,
> }
>
> if (vma_iter_prealloc(vmi, vma))
> - return NULL;
> + goto prealloc_fail;
> init_multi_vma_prep(&vp, vma, adjust, remove, remove2);
> VM_WARN_ON(vp.anon_vma && adjust && adjust->anon_vma &&
> @@ -1016,6 +1016,12 @@ struct vm_area_struct *vma_merge(struct vma_iterator *vmi, struct mm_struct *mm,
> vma_complete(&vp, vmi, mm);
> khugepaged_enter_vma(res, vm_flags);
> return res;
> +
> +prealloc_fail:
> +anon_vma_fail:
> + if (merge_prev)
> + vma_next(vmi);
> + return NULL;
> }
>
> /*