Message-Id: <1475819136-24358-5-git-send-email-minchan@kernel.org>
Date:   Fri,  7 Oct 2016 14:45:36 +0900
From:   Minchan Kim <minchan@...nel.org>
To:     Andrew Morton <akpm@...ux-foundation.org>
Cc:     Mel Gorman <mgorman@...hsingularity.net>,
        Vlastimil Babka <vbabka@...e.cz>,
        Joonsoo Kim <iamjoonsoo.kim@....com>,
        linux-kernel@...r.kernel.org, linux-mm@...ck.org,
        Sangseok Lee <sangseok.lee@....com>,
        Minchan Kim <minchan@...nel.org>
Subject: [PATCH 4/4] mm: skip reserving pageblocks that cross zone boundaries for HIGHATOMIC

With CONFIG_SPARSEMEM, the VM shares the pageblock_flags of a mem_section
between two zones when a pageblock crosses a zone boundary. That means a
single zone lock cannot protect against races on pageblock migratetype
changes.
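
To make that concrete, here is an illustrative layout (the PFNs are made
up, assuming pageblock_order == 9, i.e. pageblock_nr_pages == 512):

	zone A: pfn 0x10000 .. 0x202ff   (end not pageblock aligned)
	zone B: pfn 0x20300 .. onwards
	pageblock 0x20200 .. 0x203ff contains pages from both zones,
	so its migratetype bits are shared between A and B.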

That might not have been a problem while migratetype was inherently racy,
but with the introduction of CMA that was no longer true, and the race has
been fixed for CMA. (I hope it will be solved by a more general approach
eventually, though...) Now it is MIGRATE_HIGHATOMIC's turn.

More importantly, the HIGHATOMIC reserve is small (i.e., at most 1% of
the system), so let's skip such crippled pageblocks and try to reserve
the full 1% from sane ones.
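
For reference, the new check below boils down to the following sketch
(my illustration, not part of the patch; the helper name is made up,
while zone_spans_pfn() and pageblock_nr_pages are existing symbols):

	/* True only if the whole pageblock holding @pfn lies within @zone. */
	static bool pageblock_within_zone(struct zone *zone, unsigned long pfn)
	{
		unsigned long start_pfn = pfn & ~(pageblock_nr_pages - 1);
		unsigned long end_pfn = start_pfn + pageblock_nr_pages - 1;

		return zone_spans_pfn(zone, start_pfn) &&
		       zone_spans_pfn(zone, end_pfn);
	}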

Debugged-by: Joonsoo Kim <iamjoonsoo.kim@....com>
Signed-off-by: Minchan Kim <minchan@...nel.org>
---
 mm/page_alloc.c | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index eeb047bb0e9d..d76bb50baf61 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2098,6 +2098,24 @@ static void reserve_highatomic_pageblock(struct page *page, struct zone *zone,
 	mt = get_pageblock_migratetype(page);
 	if (mt != MIGRATE_HIGHATOMIC &&
 			!is_migrate_isolate(mt) && !is_migrate_cma(mt)) {
+		/*
+		 * If the pageblock crosses a zone boundary, we would need
+		 * both zone locks.  Avoid that complexity: the highatomic
+		 * reserve is small, so prefer reserving only sane pageblocks
+		 * that lie entirely within this zone.
+		 */
+		unsigned long start_pfn, end_pfn;
+
+		start_pfn = page_to_pfn(page);
+		start_pfn = start_pfn & ~(pageblock_nr_pages - 1);
+
+		if (!zone_spans_pfn(zone, start_pfn))
+			goto out_unlock;
+
+		end_pfn = start_pfn + pageblock_nr_pages - 1;
+		if (!zone_spans_pfn(zone, end_pfn))
+			goto out_unlock;
+
 		zone->nr_reserved_highatomic += pageblock_nr_pages;
 		set_pageblock_migratetype(page, MIGRATE_HIGHATOMIC);
 		move_freepages_block(zone, page, MIGRATE_HIGHATOMIC);
-- 
2.7.4
