Message-ID: <alpine.LNX.2.00.1211131826090.29612@eggly.anvils>
Date:	Tue, 13 Nov 2012 18:27:53 -0800 (PST)
From:	Hugh Dickins <hughd@...gle.com>
To:	Ingo Molnar <mingo@...nel.org>
cc:	Johannes Weiner <hannes@...xchg.org>, linux-kernel@...r.kernel.org,
	linux-mm@...ck.org, Paul Turner <pjt@...gle.com>,
	Lee Schermerhorn <Lee.Schermerhorn@...com>,
	Christoph Lameter <cl@...ux.com>,
	Rik van Riel <riel@...hat.com>, Mel Gorman <mgorman@...e.de>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Andrea Arcangeli <aarcange@...hat.com>,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	Peter Zijlstra <a.p.zijlstra@...llo.nl>,
	Thomas Gleixner <tglx@...utronix.de>,
	Zhouping Liu <zliu@...hat.com>
Subject: [PATCH 1/2] sched, numa, mm: Add memcg support to
 do_huge_pmd_numa_page()

From: Johannes Weiner <hannes@...xchg.org>

Add mem_cgroup_prepare_migration() and mem_cgroup_end_migration() calls
to do_huge_pmd_numa_page(), and fix mem_cgroup_prepare_migration() so
that it accounts for a Transparent Huge Page correctly instead of
tripping its VM_BUG_ON(PageTransHuge(page)).
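
For reference, the call pairing used in do_huge_pmd_numa_page() follows
the usual memcg migration transaction.  The sketch below is illustrative
only and not part of the patch: the mem_cgroup_* calls are the real
mm/memcontrol.c entry points (declared in <linux/memcontrol.h>), while
the surrounding migrate_page_sketch() scaffold is hypothetical.

	/* Illustrative sketch only -- not kernel code. */
	static void migrate_page_sketch(struct page *page,
					struct page *new_page)
	{
		struct mem_cgroup *memcg = NULL;

		/* Begin transferring the charge from page to new_page. */
		mem_cgroup_prepare_migration(page, new_page, &memcg);

		/* ... install new_page in place of page ... */

		/*
		 * Commit: migration_ok == true leaves the charge on
		 * new_page and uncharges page; passing false would
		 * roll the charge back to page instead.
		 */
		mem_cgroup_end_migration(memcg, page, new_page, true);
	}

The memcontrol.c hunk below is what makes this legal for THP: on x86-64
with 4KiB base pages a huge page has compound_order() == 9, so nr_pages
becomes 1 << 9 == 512 and __mem_cgroup_commit_charge() charges all 512
subpages to the new page instead of just one.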

Tested-by: Zhouping Liu <zliu@...hat.com>
Signed-off-by: Johannes Weiner <hannes@...xchg.org>
Signed-off-by: Hugh Dickins <hughd@...gle.com>
---

 mm/huge_memory.c |   16 ++++++++++++++++
 mm/memcontrol.c  |    7 +++++--
 2 files changed, 21 insertions(+), 2 deletions(-)

--- mmotm/mm/huge_memory.c	2012-11-09 09:43:46.892046342 -0800
+++ linux/mm/huge_memory.c	2012-11-13 14:51:04.000321370 -0800
@@ -750,6 +750,7 @@ void do_huge_pmd_numa_page(struct mm_str
 			   unsigned int flags, pmd_t entry)
 {
 	unsigned long haddr = address & HPAGE_PMD_MASK;
+	struct mem_cgroup *memcg = NULL;
 	struct page *new_page = NULL;
 	struct page *page = NULL;
 	int node, lru;
@@ -840,6 +841,14 @@ migrate:
 
 		return;
 	}
+	/*
+	 * Traditional migration needs to prepare the memcg charge
+	 * transaction early to prevent the old page from being
+	 * uncharged when installing migration entries.  Here we can
+	 * save the potential rollback and start the charge transfer
+	 * only when migration is already known to end successfully.
+	 */
+	mem_cgroup_prepare_migration(page, new_page, &memcg);
 
 	entry = mk_pmd(new_page, vma->vm_page_prot);
 	entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma);
@@ -850,6 +859,12 @@ migrate:
 	set_pmd_at(mm, haddr, pmd, entry);
 	update_mmu_cache_pmd(vma, address, entry);
 	page_remove_rmap(page);
+	/*
+	 * Finish the charge transaction under the page table lock to
+	 * prevent split_huge_page() from dividing up the charge
+	 * before it's fully transferred to the new page.
+	 */
+	mem_cgroup_end_migration(memcg, page, new_page, true);
 	spin_unlock(&mm->page_table_lock);
 
 	put_page(page);			/* Drop the rmap reference */
@@ -860,6 +875,7 @@ migrate:
 		put_page(page);		/* drop the LRU isolation reference */
 
 	unlock_page(new_page);
+
 	unlock_page(page);
 	put_page(page);			/* Drop the local reference */
 
--- mmotm/mm/memcontrol.c	2012-11-09 09:43:46.896046342 -0800
+++ linux/mm/memcontrol.c	2012-11-13 14:51:04.004321370 -0800
@@ -4186,15 +4186,18 @@ void mem_cgroup_prepare_migration(struct
 				  struct mem_cgroup **memcgp)
 {
 	struct mem_cgroup *memcg = NULL;
+	unsigned int nr_pages = 1;
 	struct page_cgroup *pc;
 	enum charge_type ctype;
 
 	*memcgp = NULL;
 
-	VM_BUG_ON(PageTransHuge(page));
 	if (mem_cgroup_disabled())
 		return;
 
+	if (PageTransHuge(page))
+		nr_pages <<= compound_order(page);
+
 	pc = lookup_page_cgroup(page);
 	lock_page_cgroup(pc);
 	if (PageCgroupUsed(pc)) {
@@ -4256,7 +4259,7 @@ void mem_cgroup_prepare_migration(struct
 	 * charged to the res_counter since we plan on replacing the
 	 * old one and only one page is going to be left afterwards.
 	 */
-	__mem_cgroup_commit_charge(memcg, newpage, 1, ctype, false);
+	__mem_cgroup_commit_charge(memcg, newpage, nr_pages, ctype, false);
 }
 
 /* remove redundant charge if migration failed*/
--