Message-ID: <20220304184927.vkq6ewn6uqtcesma@revolver>
Date: Fri, 4 Mar 2022 18:49:33 +0000
From: Liam Howlett <liam.howlett@...cle.com>
To: Hugh Dickins <hughd@...gle.com>
CC: Andrew Morton <akpm@...ux-foundation.org>,
Oleg Nesterov <oleg@...hat.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"linux-mm@...ck.org" <linux-mm@...ck.org>
Subject: Re: [PATCH mmotm] mempolicy: mbind_range() set_policy() after
vma_merge()
* Hugh Dickins <hughd@...gle.com> [220303 23:36]:
> v2.6.34 commit 9d8cebd4bcd7 ("mm: fix mbind vma merge problem")
> introduced vma_merge() to mbind_range(); but unlike madvise, mlock and
> mprotect, it put a "continue" to skip to the next vma, whereas its
> precedents go on to update flags on the current vma before advancing:
> that left vma with the wrong setting in the infamous vma_merge() case 8.
>
> v3.10 commit 1444f92c8498 ("mm: merging memory blocks resets mempolicy")
> tried to fix that in vma_adjust(), without fully understanding the issue.
>
> v3.11 commit 3964acd0dbec ("mm: mempolicy: fix mbind_range() &&
> vma_adjust() interaction") reverted that, and went about the fix in the
> right way, but chose to optimize out an unnecessary mpol_dup() with a
> prior mpol_equal() test. But on tmpfs, that also pessimized out the
> vital call to its ->set_policy(), leaving the new mbind unenforced.
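
To spell out why that short-cut mattered: vma_replace_policy() is where
a vma's ->set_policy() gets called.  A trimmed sketch from my reading of
mm/mempolicy.c (not the verbatim source):

	static int vma_replace_policy(struct vm_area_struct *vma,
						struct mempolicy *pol)
	{
		struct mempolicy *old, *new;
		int err;

		new = mpol_dup(pol);
		if (IS_ERR(new))
			return PTR_ERR(new);

		/*
		 * This is the call that the mpol_equal() short-cut skipped:
		 * on tmpfs, ->set_policy() is what actually records the new
		 * mbind against the mapping.
		 */
		if (vma->vm_ops && vma->vm_ops->set_policy) {
			err = vma->vm_ops->set_policy(vma, new);
			if (err) {
				mpol_put(new);
				return err;
			}
		}

		old = vma->vm_policy;
		vma->vm_policy = new;
		mpol_put(old);
		return 0;
	}

So comparing the policies equal and continuing also skipped that
->set_policy() call, which is exactly the breakage described above.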
>
> Just delete that optimization now (though it could be made conditional
> on vma not having a set_policy). Also remove the "next" variable:
> it turned out to be blameless, but also pointless.
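
If that conditional version is ever wanted, I imagine it would look
something like this (untested, only to show the shape of the idea):

		if (prev) {
			vma = prev;
			/*
			 * Short-cut is only safe when there is no
			 * ->set_policy() that still needs to see the
			 * new policy (e.g. tmpfs).
			 */
			if ((!vma->vm_ops || !vma->vm_ops->set_policy) &&
			    mpol_equal(vma_policy(vma), new_pol))
				continue;
			goto replace;
		}

Just deleting it, as done here, is certainly simpler.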
>
> Fixes: 3964acd0dbec ("mm: mempolicy: fix mbind_range() && vma_adjust() interaction")
> Signed-off-by: Hugh Dickins <hughd@...gle.com>
> ---
>
> mm/mempolicy.c | 8 +-------
> 1 file changed, 1 insertion(+), 7 deletions(-)
>
> --- a/mm/mempolicy.c
> +++ b/mm/mempolicy.c
> @@ -786,7 +786,6 @@ static int vma_replace_policy(struct vm_area_struct *vma,
>  static int mbind_range(struct mm_struct *mm, unsigned long start,
>  		       unsigned long end, struct mempolicy *new_pol)
>  {
> -	struct vm_area_struct *next;
>  	struct vm_area_struct *prev;
>  	struct vm_area_struct *vma;
>  	int err = 0;
> @@ -801,8 +800,7 @@ static int mbind_range(struct mm_struct *mm, unsigned long start,
>  	if (start > vma->vm_start)
>  		prev = vma;
> 
> -	for (; vma && vma->vm_start < end; prev = vma, vma = next) {
> -		next = vma->vm_next;
> +	for (; vma && vma->vm_start < end; prev = vma, vma = vma->vm_next) {
>  		vmstart = max(start, vma->vm_start);
>  		vmend = min(end, vma->vm_end);
> 
> @@ -817,10 +815,6 @@ static int mbind_range(struct mm_struct *mm, unsigned long start,
>  				 anon_vma_name(vma));
>  		if (prev) {
>  			vma = prev;
> -			next = vma->vm_next;
> -			if (mpol_equal(vma_policy(vma), new_pol))
> -				continue;
> -			/* vma_merge() joined vma && vma->next, case 8 */
>  			goto replace;
>  		}
>  		if (vma->vm_start != vmstart) {
Reviewed-by: Liam R. Howlett <Liam.Howlett@...cle.com>