Message-Id: <1298664299-10270-2-git-send-email-mel@csn.ul.ie>
Date:	Fri, 25 Feb 2011 20:04:58 +0000
From:	Mel Gorman <mel@....ul.ie>
To:	Andrew Morton <akpm@...ux-foundation.org>
Cc:	Arthur Marsh <arthur.marsh@...ernode.on.net>,
	Clemens Ladisch <cladisch@...glemail.com>,
	Andrea Arcangeli <aarcange@...hat.com>,
	Mel Gorman <mel@....ul.ie>, Linux-MM <linux-mm@...ck.org>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: [PATCH 1/2] mm: compaction: Minimise the time IRQs are disabled while isolating free pages

compaction_alloc() isolates free pages to be used as migration targets.
While it is scanning, IRQs are disabled on the mistaken assumption that the
scan will be short. Analysis showed that IRQs were in fact being disabled
for a substantial length of time. A simple test was run using large anonymous
mappings with transparent hugepage support enabled to trigger frequent
compactions. A monitor sampled the worst IRQ-off latencies and a
post-processing tool reported the following:

Total sampled time IRQs off (not real total time): 22355
Event compaction_alloc..compaction_alloc                 8409 us count 1
Event compaction_alloc..compaction_alloc                 7341 us count 1
Event compaction_alloc..compaction_alloc                 2463 us count 1
Event compaction_alloc..compaction_alloc                 2054 us count 1
Event shrink_inactive_list..shrink_zone                  1864 us count 1
Event shrink_inactive_list..shrink_zone                    88 us count 1
Event save_args..call_softirq                              36 us count 1
Event save_args..call_softirq                              35 us count 2
Event __make_request..__blk_run_queue                      24 us count 1
Event __alloc_pages_nodemask..__alloc_pages_nodemask        6 us count 1

i.e. compaction disables IRQs for a prolonged period of time - 8ms in
one instance. The full report generated by the tool can be found at
http://www.csn.ul.ie/~mel/postings/minfree-20110225/irqsoff-vanilla-micro.report .
This patch reduces the time IRQs are disabled by simply disabling IRQs
at the last possible moment. An updated IRQs-off summary report then
looks like:

Total sampled time IRQs off (not real total time): 5493
Event shrink_inactive_list..shrink_zone                  1596 us count 1
Event shrink_inactive_list..shrink_zone                  1530 us count 1
Event shrink_inactive_list..shrink_zone                   956 us count 1
Event shrink_inactive_list..shrink_zone                   541 us count 1
Event shrink_inactive_list..shrink_zone                   531 us count 1
Event split_huge_page..add_to_swap                        232 us count 1
Event save_args..call_softirq                              36 us count 1
Event save_args..call_softirq                              35 us count 2
Event __wake_up..__wake_up                                  1 us count 1

A full report is again available at
http://www.csn.ul.ie/~mel/postings/minfree-20110225/irqsoff-minimiseirq-free-v1r4-micro.report .
As should be obvious, IRQ-disabled latencies due to compaction are
almost eliminated for this particular test.
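
As an aside for readers less familiar with the pattern, below is a minimal
user-space sketch of the same idea: do a cheap lockless pre-check per block,
take the lock only around the work that needs it, and re-check under the
lock because the state may have changed while scanning unlocked. The names
(block_ok, isolate_block, scan_blocks) are illustrative, and a pthread
spinlock stands in for zone->lock with IRQs disabled; this is not the kernel
code itself.

#include <pthread.h>
#include <stddef.h>

static pthread_spinlock_t lock;

/*
 * Placeholder helpers standing in for suitable_migration_target() and
 * isolate_freepages_block(); real code would inspect the page block.
 */
static int block_ok(size_t blk)
{
	return blk % 2 == 0;
}

static size_t isolate_block(size_t blk)
{
	(void)blk;
	return 1;
}

static size_t scan_blocks(size_t nr_blocks, size_t wanted)
{
	size_t blk, taken = 0;

	for (blk = 0; blk < nr_blocks && taken < wanted; blk++) {
		/* Skip unsuitable blocks without taking the lock at all. */
		if (!block_ok(blk))
			continue;

		/*
		 * Take the lock only around the work that needs it and
		 * re-check under the lock, since the state may have changed
		 * while we were scanning unlocked.
		 */
		pthread_spin_lock(&lock);
		if (block_ok(blk))
			taken += isolate_block(blk);
		pthread_spin_unlock(&lock);
	}

	return taken;
}

int main(void)
{
	pthread_spin_init(&lock, PTHREAD_PROCESS_PRIVATE);
	return scan_blocks(1024, 32) >= 32 ? 0 : 1;
}

The point mirrors the patch: the lock (and, in the kernel, the IRQ-off
window) is held only for each isolation step rather than across the whole
scan.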

[aarcange@...hat.com: Fix initialisation of isolated]
Signed-off-by: Mel Gorman <mel@....ul.ie>
---
 mm/compaction.c |   18 +++++++++++++-----
 1 files changed, 13 insertions(+), 5 deletions(-)

diff --git a/mm/compaction.c b/mm/compaction.c
index 8be430b8..11d88a2 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -155,7 +155,6 @@ static void isolate_freepages(struct zone *zone,
 	 * pages on cc->migratepages. We stop searching if the migrate
 	 * and free page scanners meet or enough free pages are isolated.
 	 */
-	spin_lock_irqsave(&zone->lock, flags);
 	for (; pfn > low_pfn && cc->nr_migratepages > nr_freepages;
 					pfn -= pageblock_nr_pages) {
 		unsigned long isolated;
@@ -178,9 +177,19 @@ static void isolate_freepages(struct zone *zone,
 		if (!suitable_migration_target(page))
 			continue;
 
-		/* Found a block suitable for isolating free pages from */
-		isolated = isolate_freepages_block(zone, pfn, freelist);
-		nr_freepages += isolated;
+		/*
+		 * Found a block suitable for isolating free pages from. Now
+		 * we disable interrupts, double check things are still OK,
+		 * and isolate the pages. This minimises the time IRQs are
+		 * disabled.
+		 */
+		isolated = 0;
+		spin_lock_irqsave(&zone->lock, flags);
+		if (suitable_migration_target(page)) {
+			isolated = isolate_freepages_block(zone, pfn, freelist);
+			nr_freepages += isolated;
+		}
+		spin_unlock_irqrestore(&zone->lock, flags);
 
 		/*
 		 * Record the highest PFN we isolated pages from. When next
@@ -190,7 +199,6 @@ static void isolate_freepages(struct zone *zone,
 		if (isolated)
 			high_pfn = max(high_pfn, pfn);
 	}
-	spin_unlock_irqrestore(&zone->lock, flags);
 
 	/* split_free_page does not map the pages */
 	list_for_each_entry(page, freelist, lru) {
-- 
1.7.2.3

