Message-Id: <1368028987-8369-17-git-send-email-mgorman@suse.de>
Date:	Wed,  8 May 2013 17:03:01 +0100
From:	Mel Gorman <mgorman@...e.de>
To:	Linux-MM <linux-mm@...ck.org>
Cc:	Johannes Weiner <hannes@...xchg.org>, Dave Hansen <dave@...1.net>,
	Christoph Lameter <cl@...ux.com>,
	LKML <linux-kernel@...r.kernel.org>, Mel Gorman <mgorman@...e.de>
Subject: [PATCH 16/22] mm: page allocator: Remove coalescing improvement heuristic during page free

Commit 6dda9d55 ("page allocator: reduce fragmentation in buddy
allocator by adding buddies that are merging to the tail of the free
lists") classified pages according to their probability of being part of
a high-order merge. This made sense when the number of pages being freed
was relatively small, as part of a per-cpu list drain.

However, with the introduction of magazines, a drain of the magazines
frees a larger number of pages in a batch, so the heuristic is less
likely to benefit but adds a lot of weight to the free path in the
normal case. The free path can be very hot for workloads that have
short-lived processes, are fault-intensive or work with many short-lived
in-kernel buffers. As THP is the main beneficiary of such a heuristic,
the gain is too marginal to justify impacting the free path so heavily;
remove it.

Signed-off-by: Mel Gorman <mgorman@...e.de>
---
 mm/page_alloc.c | 22 ----------------------
 1 file changed, 22 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index b30abe8..6760e00 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -577,29 +577,7 @@ static inline void __free_one_page(struct page *page,
 	}
 	set_page_order(page, order);
 
-	/*
-	 * If this is not the largest possible page, check if the buddy
-	 * of the next-highest order is free. If it is, it's possible
-	 * that pages are being freed that will coalesce soon. In case,
-	 * that is happening, add the free page to the tail of the list
-	 * so it's less likely to be used soon and more likely to be merged
-	 * as a higher order page
-	 */
-	if ((order < MAX_ORDER-2) && pfn_valid_within(page_to_pfn(buddy))) {
-		struct page *higher_page, *higher_buddy;
-		combined_idx = buddy_idx & page_idx;
-		higher_page = page + (combined_idx - page_idx);
-		buddy_idx = __find_buddy_index(combined_idx, order + 1);
-		higher_buddy = higher_page + (buddy_idx - combined_idx);
-		if (page_is_buddy(higher_page, higher_buddy, order + 1)) {
-			list_add_tail(&page->lru,
-				&zone->free_area[order].free_list[migratetype]);
-			goto out;
-		}
-	}
-
 	list_add(&page->lru, &zone->free_area[order].free_list[migratetype]);
-out:
 	zone->free_area[order].nr_free++;
 }
 
-- 
1.8.1.4

