Date:	Tue, 21 Jun 2016 22:05:56 +0800
From:	zhongjiang <zhongjiang@...wei.com>
To:	<mhocko@...nel.org>, <akpm@...ux-foundation.org>
CC:	<linux-mm@...ck.org>, <linux-kernel@...r.kernel.org>
Subject: [PATCH] mm/huge_memory: fix the memory leak due to the race

From: zhong jiang <zhongjiang@...wei.com>

Under heavy memory pressure I ran some test cases and found that a THP is not
freed; check_mm() detects it:

BUG: Bad rss-counter state mm:ffff8827edb70000 idx:1 val:512
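
For reference, idx 1 is MM_ANONPAGES, and val 512 matches HPAGE_PMD_NR on
x86-64 with 4K base pages, i.e. exactly one unaccounted 2M THP. The report
comes from check_mm(), which walks the per-mm rss counters when the mm is
finally dropped; a trimmed sketch (not verbatim) of the check in
kernel/fork.c:

static void check_mm(struct mm_struct *mm)
{
	int i;

	for (i = 0; i < NR_MM_COUNTERS; i++) {
		long x = atomic_long_read(&mm->rss_stat.count[i]);

		/*
		 * Any non-zero counter at teardown means pages were mapped
		 * into this mm but never unaccounted.
		 */
		if (unlikely(x))
			printk(KERN_ALERT "BUG: Bad rss-counter state "
			       "mm:%p idx:%d val:%ld\n", mm, i, x);
	}
}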

Consider the following race:

  CPU0                                          CPU1

  __handle_mm_fault()
    wp_huge_pmd()
      do_huge_pmd_wp_page()
        pmdp_huge_clear_flush_notify()
        (pmd_none = true)
                                                exit_mmap()
                                                  unmap_vmas()
                                                    zap_pmd_range()
                                                      pmd_none_or_trans_huge_or_clear_bad()
                                                      (results in a memory leak)
        set_pmd_at()

CPU0 has already allocated the new huge page when pmdp_huge_clear_flush_notify()
clears the pmd, leaving the entry none. If CPU1 tears the mm down in that
window, zap_pmd_range() skips the none pmd, so the huge page that set_pmd_at()
installs afterwards is never unmapped or freed. Therefore the memory leak can
occur.
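
For context, the zap side skips a none pmd entirely; roughly (trimmed, not
verbatim) from include/asm-generic/pgtable.h and mm/memory.c of this era:

static inline int pmd_none_or_trans_huge_or_clear_bad(pmd_t *pmd)
{
	pmd_t pmdval = pmd_read_atomic(pmd);

	/*
	 * A none pmd is reported like a stable huge pmd: return 1 so the
	 * caller does not descend to the pte level for this range.
	 */
	if (pmd_none(pmdval) || pmd_trans_huge(pmdval))
		return 1;
	if (unlikely(pmd_bad(pmdval))) {
		pmd_clear_bad(pmd);
		return 1;
	}
	return 0;
}

	/* in zap_pmd_range(): nothing is torn down for a none pmd */
	if (pmd_none_or_trans_huge_or_clear_bad(pmd))
		goto next;
	next = zap_pte_range(tlb, vma, pmd, addr, next, details);

So once CPU0 has cleared the pmd, CPU1's exit path finds nothing to unmap
there, and the huge page installed right afterwards keeps its reference.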

This patch closes the window in which the pmd entry can be none.
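
The idea is that pmdp_invalidate() marks the existing entry not-present
instead of clearing it, so the pmd never reads as none and a concurrent
zap_pmd_range() still finds it. For reference, the generic helper in
mm/pgtable-generic.c of this era looks roughly like this (trimmed, not
verbatim):

void pmdp_invalidate(struct vm_area_struct *vma, unsigned long address,
		     pmd_t *pmdp)
{
	pmd_t entry = *pmdp;

	/* the pmd stays non-none: only the present bit is dropped */
	set_pmd_at(vma->vm_mm, address, pmdp, pmd_mknotpresent(entry));
	flush_pmd_tlb_range(vma, address, address + HPAGE_PMD_SIZE);
}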

Signed-off-by: zhong jiang <zhongjiang@...wei.com>
---
 mm/huge_memory.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index e10a4fe..ef04b94 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1340,11 +1340,11 @@ alloc:
 		pmd_t entry;
 		entry = mk_huge_pmd(new_page, vma->vm_page_prot);
 		entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma);
-		pmdp_huge_clear_flush_notify(vma, haddr, pmd);
+		pmdp_invalidate(vma, haddr, pmd);	
 		page_add_new_anon_rmap(new_page, vma, haddr, true);
 		mem_cgroup_commit_charge(new_page, memcg, false, true);
 		lru_cache_add_active_or_unevictable(new_page, vma);
-		set_pmd_at(mm, haddr, pmd, entry);
+		pmd_populate(mm, pmd, entry);
 		update_mmu_cache_pmd(vma, address, pmd);
 		if (!page) {
 			add_mm_counter(mm, MM_ANONPAGES, HPAGE_PMD_NR);
-- 
1.8.3.1
