Date: Thu, 21 Apr 2011 18:34:03 -0700
From: John Stultz <john.stultz@...aro.org>
To: linux-kernel@...r.kernel.org
Cc: Arve Hjønnevåg <arve@...roid.com>,
	Dave Hansen <dave@...ux.vnet.ibm.com>,
	Mel Gorman <mgorman@...e.de>,
	Andrew Morton <akpm@...ux-foundation.org>,
	John Stultz <john.stultz@...aro.org>
Subject: [PATCH] mm: Check if any page in a pageblock is reserved before
 marking it MIGRATE_RESERVE

From: Arve Hjønnevåg <arve@...roid.com>

This fixes a problem where the first pageblock got marked MIGRATE_RESERVE
even though it only had a few free pages. This in turn caused no contiguous
memory to be reserved and frequent kswapd wakeups that emptied the caches
to get more contiguous memory.

CC: Dave Hansen <dave@...ux.vnet.ibm.com>
CC: Mel Gorman <mgorman@...e.de>
CC: Andrew Morton <akpm@...ux-foundation.org>
Signed-off-by: Arve Hjønnevåg <arve@...roid.com>
Acked-by: Mel Gorman <mel@....ul.ie>
[This patch was submitted and acked a little over a year ago (see:
http://lkml.org/lkml/2010/4/6/172 ), but never seemingly made it upstream.
Resending for comments. -jstultz]
Signed-off-by: John Stultz <john.stultz@...aro.org>
---
 mm/page_alloc.c |   16 +++++++++++++++-
 1 files changed, 15 insertions(+), 1 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index ed87f3b..209d9bf 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3288,6 +3288,20 @@ static inline unsigned long wait_table_bits(unsigned long size)
 #define LONG_ALIGN(x) (((x)+(sizeof(long))-1)&~((sizeof(long))-1))
 
 /*
+ * Check if a pageblock contains reserved pages
+ */
+static int pageblock_is_reserved(unsigned long start_pfn)
+{
+	unsigned long end_pfn = start_pfn + pageblock_nr_pages;
+	unsigned long pfn;
+
+	for (pfn = start_pfn; pfn < end_pfn; pfn++)
+		if (PageReserved(pfn_to_page(pfn)))
+			return 1;
+	return 0;
+}
+
+/*
  * Mark a number of pageblocks as MIGRATE_RESERVE. The number
  * of blocks reserved is based on min_wmark_pages(zone). The memory within
  * the reserve will tend to store contiguous free pages. Setting min_free_kbytes
@@ -3326,7 +3340,7 @@ static void setup_zone_migrate_reserve(struct zone *zone)
 			continue;
 
 		/* Blocks with reserved pages will never free, skip them. */
-		if (PageReserved(page))
+		if (pageblock_is_reserved(pfn))
 			continue;
 
 		block_migratetype = get_pageblock_migratetype(page);
-- 
1.7.3.2.146.gca209

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/