Message-Id: <20170623085345.11304-7-mhocko@kernel.org>
Date:   Fri, 23 Jun 2017 10:53:45 +0200
From:   Michal Hocko <mhocko@...nel.org>
To:     Andrew Morton <akpm@...ux-foundation.org>
Cc:     Vlastimil Babka <vbabka@...e.cz>,
        Johannes Weiner <hannes@...xchg.org>,
        Mel Gorman <mgorman@...e.de>, NeilBrown <neilb@...e.com>,
        LKML <linux-kernel@...r.kernel.org>, <linux-mm@...ck.org>,
        Michal Hocko <mhocko@...e.com>
Subject: [PATCH 6/6] mm, migration: do not trigger OOM killer when migrating memory

From: Michal Hocko <mhocko@...e.com>

Page migration (for memory hotplug, soft_offline_page or mbind) needs
to allocate new memory. This can trigger the OOM killer if the target
memory is depleted. Although quite unlikely, it is still possible,
especially for memory hotplug (offlining of memory). Up to now we did
not really have reasonable means to back off: __GFP_NORETRY can fail
just too easily and __GFP_THISNODE sticks to a single node, which is
not suitable for all callers.

But now that we have __GFP_RETRY_MAYFAIL we should use it. It is
preferable to fail the migration rather than disrupt the system by
killing some processes.
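
For illustration only (not part of the patch; the helper name below is
made up), a rough sketch of the intended semantics: the allocator still
retries hard, but returns NULL instead of invoking the OOM killer, so
the migration path simply fails for that page:

	/* sketch: allocate a migration target without OOM-killing anything */
	static struct page *example_new_page(int nid)
	{
		gfp_t gfp_mask = GFP_HIGHUSER_MOVABLE | __GFP_RETRY_MAYFAIL;

		/* NULL here means the migration of this page fails gracefully */
		return __alloc_pages_node(nid, gfp_mask, 0);
	}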

Signed-off-by: Michal Hocko <mhocko@...e.com>
---
 include/linux/migrate.h | 2 +-
 mm/memory-failure.c     | 3 ++-
 mm/mempolicy.c          | 3 ++-
 3 files changed, 5 insertions(+), 3 deletions(-)

diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index f80c9882403a..9f5885dae80e 100644
--- a/include/linux/migrate.h
+++ b/include/linux/migrate.h
@@ -34,7 +34,7 @@ extern char *migrate_reason_names[MR_TYPES];
 static inline struct page *new_page_nodemask(struct page *page, int preferred_nid,
 		nodemask_t *nodemask)
 {
-	gfp_t gfp_mask = GFP_USER | __GFP_MOVABLE;
+	gfp_t gfp_mask = GFP_USER | __GFP_MOVABLE | __GFP_RETRY_MAYFAIL;
 
 	if (PageHuge(page))
 		return alloc_huge_page_nodemask(page_hstate(compound_head(page)),
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index e2e0cb0e1d0f..fe0c484c6fdb 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -1492,7 +1492,8 @@ static struct page *new_page(struct page *p, unsigned long private, int **x)
 
 		return alloc_huge_page_node(hstate, nid);
 	} else {
-		return __alloc_pages_node(nid, GFP_HIGHUSER_MOVABLE, 0);
+		return __alloc_pages_node(nid,
+				GFP_HIGHUSER_MOVABLE | __GFP_RETRY_MAYFAIL, 0);
 	}
 }
 
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 7d8e56214ac0..d911fa5cb2a7 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -1078,7 +1078,8 @@ static struct page *new_page(struct page *page, unsigned long start, int **x)
 	/*
 	 * if !vma, alloc_page_vma() will use task or system default policy
 	 */
-	return alloc_page_vma(GFP_HIGHUSER_MOVABLE, vma, address);
+	return alloc_page_vma(GFP_HIGHUSER_MOVABLE | __GFP_RETRY_MAYFAIL,
+			vma, address);
 }
 #else
 
-- 
2.11.0
