Date:	Mon, 12 Nov 2007 21:25:26 +0300
From:	Alexey Dobriyan <adobriyan@...il.com>
To:	mel@....ul.ie, akpm@...l.org
Cc:	torvalds@...l.org, linux-kernel@...r.kernel.org,
	linux-mm@...r.kernel.org
Subject: Major mke2fs slowdown (reproducible, bisected)

The cross-compile farm here migrated to keeping .ccache and the build
dirs on separate disks, so now I have a way to blow away .ccache without
waiting half an hour for rm(1) to finish. It's called mke2fs(8).

However, in e.g. 2.6.24-rc2, mke2fs is amazingly slow when run right
after several fat cross-compile builds. Normally it takes ~11 seconds to
finish. After commit 5adc5be7cd1bcef6bb64f5255d2a33f20a3cf5be, aka
"Bias the placement of kernel pages at lower PFNs", it takes several
minutes. 2.6.24-rc2 without this patch also gives normal mkfs speeds.
I'm pretty sure the bisection wasn't screwed up.
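
In case anyone wants to redo the bisection, something like the
following should work (the endpoints are my guess for the merge window
the commit landed in, and <device> is a placeholder):

	git bisect start v2.6.24-rc2 v2.6.23
	# at each step: build + boot the kernel, redo the builds above,
	# then time the mkfs and mark the revision accordingly
	time sudo mkfs.ext2 -m 0 <device>
	git bisect good		# or "git bisect bad" if it took minutes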


Details:

	Core 2 Duo on x86_64, no debugging
	4G RAM
	30G ext2 .ccache partition with noatime
	100G ext2 partition with source tree and build dirs with noatime

I build alpha-allnoconfig, alpha-defconfig and 4 allmodconfigs
(SMP=y/n x DEBUG_KERNEL=y/n).

Right after compilation finishes, free(1) reports more or less the same
picture (VM hackers, please tell me what info you need):

             total       used       free     shared    buffers     cached
Mem:       4032320    2802604    1229716          0      97160    2424816
-/+ buffers/cache:     280628    3751692
Swap:      7823644          0    7823644

The last steps of the build script are:

	umount /home/ad/.ccache
	sudo mkfs.ext2 -m 0 	<=== this is slow

I can prepare a standalone script with more affordable x86_64 configs if
needed.
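
Something along these lines, I mean (paths, device, and job count are
placeholders, not the exact farm setup):

	#!/bin/sh
	# fill the page cache with build output, then time the mkfs
	mkdir -p /tmp/build
	cd /path/to/linux
	for cfg in allnoconfig defconfig allmodconfig; do
		make O=/tmp/build $cfg
		make O=/tmp/build -j4
	done
	umount /home/ad/.ccache
	time sudo mkfs.ext2 -m 0 <device>	# ~11s good, minutes bad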


commit 5adc5be7cd1bcef6bb64f5255d2a33f20a3cf5be
Author: Mel Gorman <mel@....ul.ie>
Date:   Tue Oct 16 01:25:54 2007 -0700

    Bias the placement of kernel pages at lower PFNs
    
    This patch chooses blocks with lower PFNs when placing kernel allocations.
    This is particularly important during fallback in low memory situations to
    stop unmovable pages being placed throughout the entire address space.
    
    Signed-off-by: Mel Gorman <mel@....ul.ie>
    Signed-off-by: Andrew Morton <akpm@...ux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@...ux-foundation.org>

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 676aec9..e1d87ee 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -765,6 +765,23 @@ int move_freepages_block(struct zone *zone, struct page *page, int migratetype)
 	return move_freepages(zone, start_page, end_page, migratetype);
 }
 
+/* Return the page with the lowest PFN in the list */
+static struct page *min_page(struct list_head *list)
+{
+	unsigned long min_pfn = -1UL;
+	struct page *min_page = NULL, *page;
+
+	list_for_each_entry(page, list, lru) {
+		unsigned long pfn = page_to_pfn(page);
+		if (pfn < min_pfn) {
+			min_pfn = pfn;
+			min_page = page;
+		}
+	}
+
+	return min_page;
+}
+
 /* Remove an element from the buddy allocator from the fallback list */
 static struct page *__rmqueue_fallback(struct zone *zone, int order,
 						int start_migratetype)
@@ -795,8 +812,11 @@ retry:
 			if (list_empty(&area->free_list[migratetype]))
 				continue;
 
+			/* Bias kernel allocations towards low pfns */
 			page = list_entry(area->free_list[migratetype].next,
 					struct page, lru);
+			if (unlikely(start_migratetype != MIGRATE_MOVABLE))
+				page = min_page(&area->free_list[migratetype]);
 			area->nr_free--;
 
 			/*
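
FWIW, my guess at the mechanism, just from reading the diff (not
verified): min_page() walks the entire per-order free list on every
non-MOVABLE fallback allocation, so with the long free lists left
behind by the builds above, each allocation degrades from an O(1)
head-pop to an O(n) scan. A throwaway userspace sketch of that
difference, with a made-up list length:

	/* userspace sketch: O(1) head-pop vs O(n) min-scan per removal */
	#include <stdio.h>
	#include <stdlib.h>
	#include <time.h>

	struct node {
		struct node *next;
		unsigned long pfn;
	};

	static struct node *build(unsigned long n)
	{
		struct node *head = NULL, *p;

		while (n--) {
			p = malloc(sizeof(*p));
			p->pfn = (unsigned long)random();
			p->next = head;
			head = p;
		}
		return head;
	}

	/* what __rmqueue_fallback() did before the patch: take the head */
	static void drain_head(struct node *head)
	{
		struct node *p;

		while (head) {
			p = head;
			head = head->next;
			free(p);
		}
	}

	/* what min_page() implies: scan the whole list on every removal */
	static void drain_min(struct node *head)
	{
		struct node **pp, **min;
		struct node *victim;

		while (head) {
			min = &head;
			for (pp = &head; *pp; pp = &(*pp)->next)
				if ((*pp)->pfn < (*min)->pfn)
					min = pp;
			victim = *min;
			*min = victim->next;
			free(victim);
		}
	}

	int main(void)
	{
		unsigned long n = 100000;	/* made-up free-list length */
		struct node *a = build(n), *b = build(n);
		clock_t t;

		t = clock();
		drain_head(a);
		printf("head-pop: %.2fs\n",
		       (double)(clock() - t) / CLOCKS_PER_SEC);

		t = clock();
		drain_min(b);
		printf("min-scan: %.2fs\n",
		       (double)(clock() - t) / CLOCKS_PER_SEC);
		return 0;
	}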