Message-ID: <84792468-f512-e48f-378c-e34c3641e97@google.com>
Date: Wed, 2 Mar 2022 17:43:34 -0800 (PST)
From: Hugh Dickins <hughd@...gle.com>
To: Andrew Morton <akpm@...ux-foundation.org>
cc: Ralph Campbell <rcampbell@...dia.com>,
Yang Shi <shy828301@...il.com>, Zi Yan <ziy@...dia.com>,
"Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: [PATCH mmotm] mm/thp: refix __split_huge_pmd_locked() for migration PMD
Migration entries do not contribute to a page's reference count: move
__split_huge_pmd_locked()'s page_ref_add() into pmd_migration's else
block (along with the page_count() check - a page is quite likely to
have reference count frozen to 0 when a migration entry is found).
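
For context, the relevant branch of __split_huge_pmd_locked() ends up
shaped roughly like this (a simplified sketch only, abbreviating the other
locals; names follow mm/huge_memory.c of this mmotm era):

	if (unlikely(pmd_migration)) {
		/*
		 * Migration entry: the page may have its refcount frozen
		 * to 0, and holds no extra references for this PMD, so
		 * its reference count must not be touched here.
		 */
		page = pfn_swap_entry_to_page(pmd_to_swp_entry(old_pmd));
		/* write/young/soft_dirty/uffd_wp come from the swap entry */
	} else {
		page = pmd_page(old_pmd);
		/* write/young/soft_dirty/uffd_wp come from old_pmd */
		VM_BUG_ON_PAGE(!page_count(page), page);
		page_ref_add(page, HPAGE_PMD_NR - 1);
	}
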
This will fix a very rare anonymous memory leak, after a split_huge_pmd()
raced with an anon split_huge_page() or an anon THP migrate_pages(): the
wrongly raised refcount stopped the page (perhaps small, perhaps huge,
depending on when the race hit) from ever being freed.  At first I thought
there were worse risks, from prematurely unfreezing a frozen page: but now
I think that would only affect page cache pages, which do not come this
way (except, perhaps, for anonymous pages in swap cache).
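
For illustration only (not part of the patch), a minimal sketch of the
refcount arithmetic that went wrong, using the page_ref_freeze() and
page_ref_unfreeze() helpers from include/linux/page_ref.h; expected_refs
and new_refs are shorthand here, not the exact expressions the kernel uses:

	/*
	 * Splitter side (simplified): split_huge_page() has already replaced
	 * the anon THP's mappings by migration entries, which hold no page
	 * references, and freezes the remaining references to 0 before
	 * tearing the compound page apart.
	 */
	if (page_ref_freeze(head, expected_refs)) {	/* expected_refs: shorthand */
		/* split the page, then unfreeze with the new reference count */
		page_ref_unfreeze(head, new_refs);	/* new_refs: shorthand */
	}

	/*
	 * Racing __split_huge_pmd_locked() on the migration PMD, before this
	 * patch: it added HPAGE_PMD_NR - 1 references that nothing would ever
	 * drop, so the page (small or huge, depending on timing) could never
	 * reach refcount 0 again and be freed.
	 */
	page_ref_add(page, HPAGE_PMD_NR - 1);	/* wrong for a migration PMD */
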
Fixes: ec0abae6dcdf ("mm/thp: fix __split_huge_pmd_locked() for migration PMD")
Signed-off-by: Hugh Dickins <hughd@...gle.com>
---
That's an unfair "Fixes": that commit did not introduce the problem, but
it missed this aspect of the problem; and it will be a good guide to where
this refix should go if stable backports are asked for.
mm/huge_memory.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2039,9 +2039,9 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
 		young = pmd_young(old_pmd);
 		soft_dirty = pmd_soft_dirty(old_pmd);
 		uffd_wp = pmd_uffd_wp(old_pmd);
+		VM_BUG_ON_PAGE(!page_count(page), page);
+		page_ref_add(page, HPAGE_PMD_NR - 1);
 	}
-	VM_BUG_ON_PAGE(!page_count(page), page);
-	page_ref_add(page, HPAGE_PMD_NR - 1);
 
 	/*
 	 * Withdraw the table only after we mark the pmd entry invalid.