Message-ID: <319e4db9-64ae-4bca-92f0-ade85d342ff@google.com>
Date: Thu, 3 Mar 2022 20:36:37 -0800 (PST)
From: Hugh Dickins <hughd@...gle.com>
To: Andrew Morton <akpm@...ux-foundation.org>
cc: Oleg Nesterov <oleg@...hat.com>,
Liam Howlett <liam.howlett@...cle.com>,
linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: [PATCH mmotm] mempolicy: mbind_range() set_policy() after
vma_merge()

v2.6.34 commit 9d8cebd4bcd7 ("mm: fix mbind vma merge problem")
introduced vma_merge() to mbind_range(); but unlike madvise, mlock and
mprotect, it put a "continue" to the next vma, where its precedents go
on to update flags on the current vma before advancing: that left vma
with the wrong setting in the infamous vma_merge() case 8.

v3.10 commit 1444f92c8498 ("mm: merging memory blocks resets mempolicy")
tried to fix that in vma_adjust(), without fully understanding the issue.

v3.11 commit 3964acd0dbec ("mm: mempolicy: fix mbind_range() &&
vma_adjust() interaction") reverted that, and went about the fix in the
right way, but chose to optimize out an unnecessary mpol_dup() with a
prior mpol_equal() test. But on tmpfs, that also pessimized out the
vital call to its ->set_policy(), leaving the new mbind unenforced.
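
For context, vma_replace_policy() (abridged below as a sketch from
memory; see mm/mempolicy.c for the real thing) is where a mapping's
->set_policy() gets called. On tmpfs that is shmem_set_policy(), which
records the policy on the shared shmem object, so skipping the call
leaves the new policy unenforced there even when the vma's own
vm_policy already matches:

	new = mpol_dup(pol);
	if (IS_ERR(new))
		return PTR_ERR(new);

	if (vma->vm_ops && vma->vm_ops->set_policy) {
		/* tmpfs: shmem_set_policy() stores it on the inode */
		err = vma->vm_ops->set_policy(vma, new);
		if (err) {
			mpol_put(new);
			return err;
		}
	}

	old = vma->vm_policy;
	vma->vm_policy = new;
	mpol_put(old);
	return 0;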

Just delete that optimization now (though it could be made conditional
on vma not having a set_policy). Also remove the "next" variable:
it turned out to be blameless, but also pointless.
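
If the optimization were ever wanted back, an untested sketch of that
conditional form (not part of this patch) might look like:

		if (prev) {
			vma = prev;
			/* only safe to skip when nothing needs ->set_policy() */
			if ((!vma->vm_ops || !vma->vm_ops->set_policy) &&
			    mpol_equal(vma_policy(vma), new_pol))
				continue;
			goto replace;
		}
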
Fixes: 3964acd0dbec ("mm: mempolicy: fix mbind_range() && vma_adjust() interaction")
Signed-off-by: Hugh Dickins <hughd@...gle.com>
---
 mm/mempolicy.c | 8 +-------
 1 file changed, 1 insertion(+), 7 deletions(-)

--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -786,7 +786,6 @@ static int vma_replace_policy(struct vm_area_struct *vma,
 static int mbind_range(struct mm_struct *mm, unsigned long start,
 		  unsigned long end, struct mempolicy *new_pol)
 {
-	struct vm_area_struct *next;
 	struct vm_area_struct *prev;
 	struct vm_area_struct *vma;
 	int err = 0;
@@ -801,8 +800,7 @@ static int mbind_range(struct mm_struct *mm, unsigned long start,
 	if (start > vma->vm_start)
 		prev = vma;
 
-	for (; vma && vma->vm_start < end; prev = vma, vma = next) {
-		next = vma->vm_next;
+	for (; vma && vma->vm_start < end; prev = vma, vma = vma->vm_next) {
 		vmstart = max(start, vma->vm_start);
 		vmend   = min(end, vma->vm_end);
 
@@ -817,10 +815,6 @@ static int mbind_range(struct mm_struct *mm, unsigned long start,
 			   anon_vma_name(vma));
 		if (prev) {
 			vma = prev;
-			next = vma->vm_next;
-			if (mpol_equal(vma_policy(vma), new_pol))
-				continue;
-			/* vma_merge() joined vma && vma->next, case 8 */
 			goto replace;
 		}
 		if (vma->vm_start != vmstart) {