Message-Id: <20110426185114.F3A4.A69D9226@jp.fujitsu.com>
Date:	Tue, 26 Apr 2011 18:49:39 +0900 (JST)
From:	KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>
To:	Mel Gorman <mgorman@...e.de>
Cc:	kosaki.motohiro@...fujitsu.com,
	John Stultz <john.stultz@...aro.org>,
	linux-kernel@...r.kernel.org, Arve Hjønnevåg <arve@...roid.com>,
	Dave Hansen <dave@...ux.vnet.ibm.com>,
	Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [PATCH] mm: Check if any page in a pageblock is reserved before marking it MIGRATE_RESERVE

> On Thu, Apr 21, 2011 at 06:34:03PM -0700, John Stultz wrote:
> > From: Arve Hjønnevåg <arve@...roid.com>
> > 
> > This fixes a problem where the first pageblock got marked MIGRATE_RESERVE even
> > though it only had a few free pages. This in turn caused no contiguous memory
> > to be reserved and frequent kswapd wakeups that emptied the caches to get more
> > contiguous memory.
> > 
> > CC: Dave Hansen <dave@...ux.vnet.ibm.com>
> > CC: Mel Gorman <mgorman@...e.de>
> > CC: Andrew Morton <akpm@...ux-foundation.org>
> > Signed-off-by: Arve Hjønnevåg <arve@...roid.com>
> > Acked-by: Mel Gorman <mel@....ul.ie>
> > 
> > [This patch was submitted and acked a little over a year ago
> > (see: http://lkml.org/lkml/2010/4/6/172 ), but never seemingly
> > made it upstream. Resending for comments. -jstultz]
> > 
> > Signed-off-by: John Stultz <john.stultz@...aro.org>
> 
> Whoops, should have spotted it slipped through. FWIW, I'm still happy
> with my Ack being stuck onto it.

Hehe, no.

You acked a different patch last year, and John picked up the old one. Sigh.
Look, the correct one has pfn_valid_within():
	http://lkml.org/lkml/2010/4/6/172
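
(For reference, pfn_valid_within() degrades to a constant 1 unless the zone
can contain holes, so the per-pfn check is free on most configurations;
paraphrased from include/linux/mmzone.h of this era:

	#ifdef CONFIG_HOLES_IN_ZONE
	#define pfn_valid_within(pfn)	pfn_valid(pfn)
	#else
	#define pfn_valid_within(pfn)	(1)
	#endif
)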

Also, Minchan suggested adding more explanation to the description. So I think
the following is the desirable one.



Subject: [PATCH] mm: Check if any page in a pageblock is reserved before marking it MIGRATE_RESERVE
From: Arve Hjønnevåg <arve@...roid.com>

This fixes a problem where the first pageblock got marked MIGRATE_RESERVE even
though it only had a few free pages. E.g., on the current ARM port, the kernel
starts at offset 0x8000 to leave room for boot parameters, and that memory is
freed back later.

This in turn caused no contiguous memory to be reserved and frequent kswapd
wakeups that emptied the caches to get more contiguous memory.

Unfortunately, ARM needs an order-2 allocation for its pgd (see
arm/mm/pgd.c#pgd_alloc()). Therefore the issue is neither minor nor easily
avoidable.

CC: Andrew Morton <akpm@...ux-foundation.org>
Signed-off-by: Arve Hjønnevåg <arve@...roid.com>
Acked-by: Mel Gorman <mel@....ul.ie>
Acked-by: Dave Hansen <dave@...ux.vnet.ibm.com>
Signed-off-by: John Stultz <john.stultz@...aro.org>
Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com> [added some explanation]
---
 mm/page_alloc.c |   16 +++++++++++++++-
 1 files changed, 15 insertions(+), 1 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 1d5c189..10d9fa7 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3282,6 +3282,20 @@ static inline unsigned long wait_table_bits(unsigned long size)
 #define LONG_ALIGN(x) (((x)+(sizeof(long))-1)&~((sizeof(long))-1))
 
 /*
+ * Check if a pageblock contains reserved pages
+ */
+static int pageblock_is_reserved(unsigned long start_pfn)
+{
+	unsigned long end_pfn = start_pfn + pageblock_nr_pages;
+	unsigned long pfn;
+
+	for (pfn = start_pfn; pfn < end_pfn; pfn++)
+		if (!pfn_valid_within(pfn) || PageReserved(pfn_to_page(pfn)))
+			return 1;
+	return 0;
+}
+
+/*
  * Mark a number of pageblocks as MIGRATE_RESERVE. The number
  * of blocks reserved is based on min_wmark_pages(zone). The memory within
  * the reserve will tend to store contiguous free pages. Setting min_free_kbytes
@@ -3320,7 +3334,7 @@ static void setup_zone_migrate_reserve(struct zone *zone)
 			continue;
 
 		/* Blocks with reserved pages will never free, skip them. */
-		if (PageReserved(page))
+		if (pageblock_is_reserved(pfn))
 			continue;
 
 		block_migratetype = get_pageblock_migratetype(page);
-- 
1.7.3.1
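
For anyone who wants to see the failure mode concretely, here is a minimal
user-space mock (not kernel code; the array size, the reserved[] bitmap, and
both helper names are invented for the demo). It models the ARM layout from
the changelog: only the first few pages of the block were freed back, while
the rest of the block holds the kernel image and stays reserved.

	/* mock.c: old single-page check vs. new whole-block scan */
	#include <stdio.h>
	#include <stdbool.h>

	#define PAGEBLOCK_NR_PAGES 1024		/* e.g. 4MB pageblock of 4KB pages */

	static bool reserved[PAGEBLOCK_NR_PAGES];	/* PageReserved() stand-in */

	/* Old behaviour: only the pageblock's first page is examined. */
	static bool block_reserved_old(void)
	{
		return reserved[0];
	}

	/* New behaviour: mirrors pageblock_is_reserved() and scans every pfn. */
	static bool block_reserved_new(void)
	{
		unsigned long pfn;

		for (pfn = 0; pfn < PAGEBLOCK_NR_PAGES; pfn++)
			if (reserved[pfn])
				return true;
		return false;
	}

	int main(void)
	{
		unsigned long pfn;

		/* Pages 0-7 (the 32KB below the kernel's 0x8000 load offset)
		 * were freed back; the rest of the block is the kernel image. */
		for (pfn = 8; pfn < PAGEBLOCK_NR_PAGES; pfn++)
			reserved[pfn] = true;

		printf("old check: %d (block wrongly eligible for MIGRATE_RESERVE)\n",
		       block_reserved_old());
		printf("new check: %d (block correctly skipped)\n",
		       block_reserved_new());
		return 0;
	}

With the old check, setup_zone_migrate_reserve() burns one of the few reserve
slots on a block that can never supply contiguous free pages, which is exactly
why kswapd kept waking up to hunt for order-2 pages for the ARM pgd.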


