Message-Id: <1495216887-3175-2-git-send-email-jglisse@redhat.com>
Date:   Fri, 19 May 2017 14:01:27 -0400
From:   Jérôme Glisse <jglisse@...hat.com>
To:     <linux-kernel@...r.kernel.org>, linux-mm@...ck.org
Cc:     Jérôme Glisse <jglisse@...hat.com>,
        "Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Ingo Molnar <mingo@...nel.org>, Michal Hocko <mhocko@...e.com>,
        Mel Gorman <mgorman@...e.de>
Subject: [PATCH] x86/mm: synchronize pgd in vmemmap_free()

When we free a kernel virtual mapping we should synchronize the p4d/pud
entries for all pgds, to avoid leaving stale entries in the non-canonical pgds.

Signed-off-by: Jérôme Glisse <jglisse@...hat.com>
Cc: Kirill A. Shutemov <kirill.shutemov@...ux.intel.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>
Cc: Ingo Molnar <mingo@...nel.org>
Cc: Michal Hocko <mhocko@...e.com>
Cc: Mel Gorman <mgorman@...e.de>
---
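[Note, not part of the patch; illustration only] Below is a minimal userspace
model of the rule sync_global_pgds() follows after this change: a slot freed
in the reference table is also cleared in every copy, and an empty slot in a
copy is filled from the reference. The names (model_sync_pgds, ref, copies)
and the table sizes are invented for this sketch and do not correspond to the
kernel structures.

#include <stdio.h>
#include <stddef.h>

#define SLOTS   4	/* top-level entries in this toy model            */
#define NCOPIES 3	/* stand-ins for the per-process pgd copies       */

/* Reference table (plays the role of the init_mm page tables). */
static void *ref[SLOTS];
/* Copies that must stay consistent with the reference. */
static void *copies[NCOPIES][SLOTS];

/* Mirror one slot of the reference into every copy:
 *  - reference slot empty -> clear the (possibly stale) slot in each copy
 *  - reference slot set   -> fill the slot in copies that lack it
 */
static void model_sync_pgds(size_t slot)
{
	for (size_t i = 0; i < NCOPIES; i++) {
		if (ref[slot] == NULL)
			copies[i][slot] = NULL;		/* drop stale entry */
		else if (copies[i][slot] == NULL)
			copies[i][slot] = ref[slot];	/* propagate mapping */
	}
}

int main(void)
{
	static int backing;

	/* Map slot 1 in the reference and propagate it everywhere. */
	ref[1] = &backing;
	model_sync_pgds(1);

	/* Free it again.  Without this second sync the copies would keep a
	 * dangling pointer, the analogue of the stale p4d entry that
	 * vmemmap_free() could leave behind before this patch. */
	ref[1] = NULL;
	model_sync_pgds(1);

	for (size_t i = 0; i < NCOPIES; i++)
		printf("copy %zu slot 1: %p\n", i, copies[i][1]);
	return 0;
}

After the second model_sync_pgds() call every copy prints a NULL pointer,
i.e. no copy is left referring to the freed backing object.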
 arch/x86/mm/init_64.c | 17 ++++++++++-------
 1 file changed, 10 insertions(+), 7 deletions(-)

diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index ff95fe8..df753f8 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -108,8 +108,6 @@ void sync_global_pgds(unsigned long start, unsigned long end)
 		BUILD_BUG_ON(pgd_none(*pgd_ref));
 		p4d_ref = p4d_offset(pgd_ref, address);
 
-		if (p4d_none(*p4d_ref))
-			continue;
 
 		spin_lock(&pgd_lock);
 		list_for_each_entry(page, &pgd_list, lru) {
@@ -123,12 +121,16 @@ void sync_global_pgds(unsigned long start, unsigned long end)
 			pgt_lock = &pgd_page_get_mm(page)->page_table_lock;
 			spin_lock(pgt_lock);
 
-			if (!p4d_none(*p4d_ref) && !p4d_none(*p4d))
-				BUG_ON(p4d_page_vaddr(*p4d)
-				       != p4d_page_vaddr(*p4d_ref));
-
-			if (p4d_none(*p4d))
+			if (p4d_none(*p4d_ref)) {
 				set_p4d(p4d, *p4d_ref);
+			} else {
+				if (!p4d_none(*p4d_ref) && !p4d_none(*p4d))
+					BUG_ON(p4d_page_vaddr(*p4d)
+					       != p4d_page_vaddr(*p4d_ref));
+
+				if (p4d_none(*p4d))
+					set_p4d(p4d, *p4d_ref);
+			}
 
 			spin_unlock(pgt_lock);
 		}
@@ -1024,6 +1026,7 @@ remove_pagetable(unsigned long start, unsigned long end, bool direct)
 void __ref vmemmap_free(unsigned long start, unsigned long end)
 {
 	remove_pagetable(start, end, false);
+	sync_global_pgds(start, end - 1);
 }
 
 #ifdef CONFIG_MEMORY_HOTREMOVE
-- 
2.4.11
