Message-ID: <20130830131902.4947.17975.stgit@srivatsabhat.in.ibm.com>
Date: Fri, 30 Aug 2013 18:49:05 +0530
From: "Srivatsa S. Bhat" <srivatsa.bhat@...ux.vnet.ibm.com>
To: akpm@...ux-foundation.org, mgorman@...e.de, hannes@...xchg.org,
tony.luck@...el.com, matthew.garrett@...ula.com, dave@...1.net,
riel@...hat.com, arjan@...ux.intel.com,
srinivas.pandruvada@...ux.intel.com, willy@...ux.intel.com,
kamezawa.hiroyu@...fujitsu.com, lenb@...nel.org, rjw@...k.pl
Cc: gargankita@...il.com, paulmck@...ux.vnet.ibm.com,
svaidy@...ux.vnet.ibm.com, andi@...stfloor.org,
isimatu.yasuaki@...fujitsu.com, santosh.shilimkar@...com,
kosaki.motohiro@...il.com, srivatsa.bhat@...ux.vnet.ibm.com,
linux-pm@...r.kernel.org, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: [RFC PATCH v3 17/35] mm: Add aggressive bias to prefer lower regions
during page allocation

While allocating pages from the buddy freelists, we could have a situation
where a freepage of the requested order is readily available in a *higher*
numbered memory region, while a freepage of a higher page order is available
in a *lower* numbered memory region.

To make the consolidation logic more aggressive, try to split up the higher
order buddy page in the lower numbered region and allocate from that, rather
than taking pages from the higher numbered region.

This ensures that we spill over to a new region only when we truly don't
have enough contiguous memory in any lower numbered region to satisfy the
allocation request.
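
To make the order-vs-region trade-off concrete, below is a minimal
standalone sketch of the selection policy. The pick_alloc_order() helper
and the lowest_region[] per-order model are hypothetical simplifications
for illustration, not the kernel's actual data structures:

	/*
	 * Sketch: for a request of 'order', scan all orders >= order and
	 * pick the one whose first freepage lives in the lowest numbered
	 * memory region. lowest_region[o] is a stand-in for the region
	 * that free_list.next_region would point to; -1 means the
	 * freelist at that order is empty.
	 */
	#include <stdio.h>

	#define MAX_ORDER	11

	static int lowest_region[MAX_ORDER] = {
		-1, 3, -1, -1, -1, -1, 0, -1, -1, -1, -1
	};

	static int pick_alloc_order(unsigned int order)
	{
		unsigned int current_order, alloc_order = MAX_ORDER;
		int alloc_region = -1;

		for (current_order = order; current_order < MAX_ORDER;
		     ++current_order) {
			if (lowest_region[current_order] < 0)
				continue;	/* no free pages at this order */

			/* Prefer the order whose freepage is in the lowest region */
			if (alloc_region < 0 ||
			    lowest_region[current_order] < alloc_region) {
				alloc_region = lowest_region[current_order];
				alloc_order = current_order;
			}
		}

		return alloc_order < MAX_ORDER ? (int)alloc_order : -1;
	}

	int main(void)
	{
		/*
		 * An order-1 page is free in region 3, but an order-6 page
		 * is free in region 0: the policy picks order 6, which the
		 * allocator would then split down to order 1 via expand().
		 */
		printf("picked order %d\n", pick_alloc_order(1));
		return 0;
	}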
Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@...ux.vnet.ibm.com>
---
mm/page_alloc.c | 44 ++++++++++++++++++++++++++++++++++----------
1 file changed, 34 insertions(+), 10 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 6e711b9..0cc2a3e 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1210,8 +1210,9 @@ static inline
 struct page *__rmqueue_smallest(struct zone *zone, unsigned int order,
 						int migratetype)
 {
-	unsigned int current_order;
-	struct free_area * area;
+	unsigned int current_order, alloc_order;
+	struct free_area *area, *other_area;
+	int alloc_region, other_region;
 	struct page *page;
 
 	/* Find a page of the appropriate size in the preferred list */
@@ -1220,17 +1221,40 @@ struct page *__rmqueue_smallest(struct zone *zone, unsigned int order,
 		if (list_empty(&area->free_list[migratetype].list))
 			continue;
 
-		page = list_entry(area->free_list[migratetype].list.next,
-							struct page, lru);
-		rmqueue_del_from_freelist(page, &area->free_list[migratetype],
-					  current_order);
-		rmv_page_order(page);
-		area->nr_free--;
-		expand(zone, page, order, current_order, area, migratetype);
-		return page;
+		alloc_order = current_order;
+		alloc_region = area->free_list[migratetype].next_region -
+				area->free_list[migratetype].mr_list;
+		current_order++;
+		goto try_others;
 	}
 
 	return NULL;
+
+try_others:
+	/* Try to aggressively prefer lower numbered regions for allocations */
+	for ( ; current_order < MAX_ORDER; ++current_order) {
+		other_area = &(zone->free_area[current_order]);
+		if (list_empty(&other_area->free_list[migratetype].list))
+			continue;
+
+		other_region = other_area->free_list[migratetype].next_region -
+				other_area->free_list[migratetype].mr_list;
+
+		if (other_region < alloc_region) {
+			alloc_region = other_region;
+			alloc_order = current_order;
+		}
+	}
+
+	area = &(zone->free_area[alloc_order]);
+	page = list_entry(area->free_list[migratetype].list.next, struct page,
+								lru);
+	rmqueue_del_from_freelist(page, &area->free_list[migratetype],
+				  alloc_order);
+	rmv_page_order(page);
+	area->nr_free--;
+	expand(zone, page, order, alloc_order, area, migratetype);
+	return page;
 }
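
For reference, the alloc_region/other_region computation above relies on
ordinary C pointer arithmetic: next_region points at one element of the
freelist's mr_list[] array, so subtracting the array base yields the index
of that region, i.e. the lowest numbered region with free pages on that
list. A tiny self-contained illustration of the idiom follows; the struct
layout below is invented for the example, not the actual definition from
this series:

	#include <stdio.h>

	struct mem_region_list {
		int dummy;	/* placeholder; the real struct tracks per-region pages */
	};

	struct free_list {
		struct mem_region_list mr_list[4];	/* one entry per memory region */
		struct mem_region_list *next_region;	/* region to allocate from next */
	};

	int main(void)
	{
		struct free_list fl;

		fl.next_region = &fl.mr_list[2];

		/* Pointer minus array base gives the element's index: prints 2 */
		printf("region index: %td\n", fl.next_region - fl.mr_list);
		return 0;
	}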
--