Date:   Fri, 25 Oct 2019 01:27:46 +0800
From:   Yang Shi <yang.shi@...ux.alibaba.com>
To:     hughd@...gle.com, kirill.shutemov@...ux.intel.com,
        aarcange@...hat.com, akpm@...ux-foundation.org
Cc:     yang.shi@...ux.alibaba.com, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org
Subject: [PATCH] mm: thp: clear PageDoubleMap flag when the last PMD map gone

File THP sets the PageDoubleMap flag the first time it gets PTE mapped, but
the flag is never cleared until the THP is freed.  This results in an
unbalanced state, although it is not a big deal.

Clear the flag when the last compound_mapcount is gone.  Ideally it should
also be cleared when all the PTE maps are gone (i.e. the page becomes PMD
mapped only), but that would require checking every subpage's _mapcount
each time any subpage's rmap is removed, and the overhead may not be worth
it.  Anonymous THP likewise clears the PageDoubleMap flag only when the
last PMD map is gone.

Cc: Hugh Dickins <hughd@...gle.com>
Cc: Kirill A. Shutemov <kirill.shutemov@...ux.intel.com>
Cc: Andrea Arcangeli <aarcange@...hat.com>
Signed-off-by: Yang Shi <yang.shi@...ux.alibaba.com>
---
Hugh thought it was unnecessary to fix this completely due to the overhead
(https://lkml.org/lkml/2019/10/22/1011), but it seems simple to achieve a
balance similar to anonymous THP's.
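For reference, a minimal sketch of what the rejected "complete" fix might
look like; try_clear_double_map() is a hypothetical helper, not part of
this patch, and it assumes the file-THP accounting where subpage _mapcount
only counts PTE maps:

static void try_clear_double_map(struct page *head)
{
	int i;

	/*
	 * For file THP, a subpage's _mapcount is -1 once it has no
	 * PTE map left (PMD maps are counted in compound_mapcount
	 * only).  Bail out if any subpage is still PTE mapped.
	 */
	for (i = 0; i < HPAGE_PMD_NR; i++)
		if (atomic_read(&head[i]._mapcount) >= 0)
			return;

	ClearPageDoubleMap(head);
}

Calling something like this from every subpage rmap removal is exactly the
per-unmap O(HPAGE_PMD_NR) overhead deemed not worth it above.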

 mm/rmap.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/mm/rmap.c b/mm/rmap.c
index 0c7b2a9..d17cbf3 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1236,6 +1236,9 @@ static void page_remove_file_rmap(struct page *page, bool compound)
 			__dec_node_page_state(page, NR_SHMEM_PMDMAPPED);
 		else
 			__dec_node_page_state(page, NR_FILE_PMDMAPPED);
+
+		/* The last PMD map is gone */
+		ClearPageDoubleMap(compound_head(page));
 	} else {
 		if (!atomic_add_negative(-1, &page->_mapcount))
 			goto out;
-- 
1.8.3.1
