Date:	Tue,  1 Mar 2011 13:59:06 +0900
From:	Minchan Kim <minchan.kim@...il.com>
To:	Andrew Morton <akpm@...ux-foundation.org>
Cc:	linux-mm <linux-mm@...ck.org>, LKML <linux-kernel@...r.kernel.org>,
	Johannes Weiner <hannes@...xchg.org>,
	Daisuke Nishimura <nishimura@....nes.nec.co.jp>,
	KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
	Minchan Kim <minchan.kim@...il.com>,
	Balbir Singh <balbir@...ux.vnet.ibm.com>
Subject: [PATCH v2] memcg: clean up migration

This patch cleans up an unnecessary BUG_ON() check and the confusing
charge local variable.

Since commit 01b1ae6 ("memcg: simple migration handling"), the memcg
charge/uncharge is handled entirely by mem_cgroup_prepare_migration()
and mem_cgroup_end_migration(), so the charge local variable in
unmap_and_move() no longer serves any purpose.

In addition, mem_cgroup_prepare_migration() returns 0 only if it
succeeds; otherwise the caller jumps to the unlock label to clean up,
so the BUG_ON(charge) check is meaningless.
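
For illustration only, here is a minimal user-space sketch of the same
error-handling pattern, with hypothetical stand-in functions rather
than the kernel code itself: the caller propagates the return code of
the charge function directly and reaches the uncharge path only when
the charge succeeded, instead of caching the result in a separate
"charge" flag and asserting on it.

#include <stdio.h>

/* Hypothetical stand-in for mem_cgroup_prepare_migration():
 * returns 0 on success, a negative errno value on failure. */
static int prepare_charge(int should_fail)
{
	return should_fail ? -12 /* ENOMEM */ : 0;
}

/* Hypothetical stand-in for mem_cgroup_end_migration(). */
static void end_charge(int rc)
{
	printf("uncharge (migration %s)\n", rc == 0 ? "succeeded" : "failed");
}

/* Simplified analogue of the unmap_and_move() flow after the cleanup. */
static int migrate_one(int should_fail)
{
	int rc;

	rc = prepare_charge(should_fail);
	if (rc)
		goto unlock;	/* charge failed: never reach the uncharge path */

	rc = -11;		/* EAGAIN until the real work succeeds */
	/* ... do the migration work, set rc = 0 on success ... */
	rc = 0;

	/* Only reached when the charge succeeded, so no separate
	 * "charge" variable or BUG_ON(charge) is needed here. */
	end_charge(rc);
unlock:
	return rc;
}

int main(void)
{
	printf("success path: rc = %d\n", migrate_one(0));
	printf("failure path: rc = %d\n", migrate_one(1));
	return 0;
}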

Reviewed-by: Johannes Weiner <hannes@...xchg.org>
Acked-by: Daisuke Nishimura <nishimura@....nes.nec.co.jp>
Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
Cc: Balbir Singh <balbir@...ux.vnet.ibm.com>
Signed-off-by: Minchan Kim <minchan.kim@...il.com>

* Changes from v1
  - add Acked-by/Reviewed-by tags
  - fix a typo

---
 mm/memcontrol.c |    1 +
 mm/migrate.c    |   14 ++++----------
 2 files changed, 5 insertions(+), 10 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 2fc97fc..6832926 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -2872,6 +2872,7 @@ static inline int mem_cgroup_move_swap_account(swp_entry_t entry,
 /*
  * Before starting migration, account PAGE_SIZE to mem_cgroup that the old
  * page belongs to.
+ * Return 0 if charge is successful. Otherwise return -errno.
  */
 int mem_cgroup_prepare_migration(struct page *page,
 	struct page *newpage, struct mem_cgroup **ptr, gfp_t gfp_mask)
diff --git a/mm/migrate.c b/mm/migrate.c
index eb083a6..737c2e5 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -622,7 +622,6 @@ static int unmap_and_move(new_page_t get_new_page, unsigned long private,
 	int *result = NULL;
 	struct page *newpage = get_new_page(page, private, &result);
 	int remap_swapcache = 1;
-	int charge = 0;
 	struct mem_cgroup *mem;
 	struct anon_vma *anon_vma = NULL;
 
@@ -637,9 +636,7 @@ static int unmap_and_move(new_page_t get_new_page, unsigned long private,
 		if (unlikely(split_huge_page(page)))
 			goto move_newpage;
 
-	/* prepare cgroup just returns 0 or -ENOMEM */
 	rc = -EAGAIN;
-
 	if (!trylock_page(page)) {
 		if (!force)
 			goto move_newpage;
@@ -678,13 +675,11 @@ static int unmap_and_move(new_page_t get_new_page, unsigned long private,
 	}
 
 	/* charge against new page */
-	charge = mem_cgroup_prepare_migration(page, newpage, &mem, GFP_KERNEL);
-	if (charge == -ENOMEM) {
-		rc = -ENOMEM;
+	rc = mem_cgroup_prepare_migration(page, newpage, &mem, GFP_KERNEL);
+	if (rc)
 		goto unlock;
-	}
-	BUG_ON(charge);
 
+	rc = -EAGAIN;
 	if (PageWriteback(page)) {
 		if (!force || !sync)
 			goto uncharge;
@@ -767,8 +762,7 @@ skip_unmap:
 		drop_anon_vma(anon_vma);
 
 uncharge:
-	if (!charge)
-		mem_cgroup_end_migration(mem, page, newpage, rc == 0);
+	mem_cgroup_end_migration(mem, page, newpage, rc == 0);
 unlock:
 	unlock_page(page);
 
-- 
1.7.1

