Message-ID: <1419500608-11656-3-git-send-email-zhuhui@xiaomi.com>
Date: Thu, 25 Dec 2014 17:43:27 +0800
From: Hui Zhu <zhuhui@...omi.com>
To: <m.szyprowski@...sung.com>, <mina86@...a86.com>,
<akpm@...ux-foundation.org>, <iamjoonsoo.kim@....com>,
<aneesh.kumar@...ux.vnet.ibm.com>, <pintu.k@...sung.com>,
<weijie.yang@...sung.com>, <mgorman@...e.de>, <hannes@...xchg.org>,
<riel@...hat.com>, <vbabka@...e.cz>,
<laurent.pinchart+renesas@...asonboard.com>, <rientjes@...gle.com>,
<sasha.levin@...cle.com>, <liuweixing@...omi.com>,
<linux-kernel@...r.kernel.org>, <linux-mm@...ck.org>
CC: <teawater@...il.com>, Hui Zhu <zhuhui@...omi.com>
Subject: [PATCH 2/3] CMA: Fix the issue that nr_try_movable only counts MIGRATE_MOVABLE memory
One of my platforms that uses Joonsoo's CMA patch [1] has a device that
allocates a lot of MIGRATE_UNMOVABLE memory while it is active in a zone.
When this device works, the memory status of the zone is not good: most
of the CMA region is unallocated, while most of the normal memory is
allocated.

This issue comes from the check in __rmqueue:

	if (IS_ENABLED(CONFIG_CMA) &&
	    migratetype == MIGRATE_MOVABLE && zone->managed_cma_pages)
		page = __rmqueue_cma(zone, order);

Only MIGRATE_MOVABLE allocations are counted against nr_try_movable in
__rmqueue_cma; allocations of other migratetypes are not. The large
amount of MIGRATE_UNMOVABLE memory allocated by this device therefore
skews this zone's balance between normal and CMA allocations.

This patch changes __rmqueue so that nr_try_movable accounts for all
normal-memory allocations, regardless of migratetype.
[1] https://lkml.org/lkml/2014/5/28/64
Signed-off-by: Hui Zhu <zhuhui@...omi.com>
Signed-off-by: Weixing Liu <liuweixing@...omi.com>
---
mm/page_alloc.c | 41 ++++++++++++++++++++---------------------
1 file changed, 20 insertions(+), 21 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index a8d9f03..a5bbc38 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1301,28 +1301,23 @@ static struct page *__rmqueue_cma(struct zone *zone, unsigned int order)
 {
 	struct page *page;
 
-	if (zone->nr_try_movable > 0)
-		goto alloc_movable;
+	if (zone->nr_try_cma <= 0) {
+		/* Reset counter */
+		zone->nr_try_movable = zone->max_try_movable;
+		zone->nr_try_cma = zone->max_try_cma;
 
-	if (zone->nr_try_cma > 0) {
-		/* Okay. Now, we can try to allocate the page from cma region */
-		zone->nr_try_cma -= 1 << order;
-		page = __rmqueue_smallest(zone, order, MIGRATE_CMA);
-
-		/* CMA pages can vanish through CMA allocation */
-		if (unlikely(!page && order == 0))
-			zone->nr_try_cma = 0;
-
-		return page;
+		return NULL;
 	}
 
-	/* Reset counter */
-	zone->nr_try_movable = zone->max_try_movable;
-	zone->nr_try_cma = zone->max_try_cma;
+	/* Okay. Now, we can try to allocate the page from cma region */
+	zone->nr_try_cma -= 1 << order;
+	page = __rmqueue_smallest(zone, order, MIGRATE_CMA);
 
-alloc_movable:
-	zone->nr_try_movable -= 1 << order;
-	return NULL;
+	/* CMA pages can vanish through CMA allocation */
+	if (unlikely(!page && order == 0))
+		zone->nr_try_cma = 0;
+
+	return page;
 }
 #endif
 
@@ -1335,9 +1330,13 @@ static struct page *__rmqueue(struct zone *zone, unsigned int order,
 {
 	struct page *page = NULL;
 
-	if (IS_ENABLED(CONFIG_CMA) &&
-	    migratetype == MIGRATE_MOVABLE && zone->managed_cma_pages)
-		page = __rmqueue_cma(zone, order);
+	if (IS_ENABLED(CONFIG_CMA) && zone->managed_cma_pages) {
+		if (migratetype == MIGRATE_MOVABLE
+		    && zone->nr_try_movable <= 0)
+			page = __rmqueue_cma(zone, order);
+		else
+			zone->nr_try_movable -= 1 << order;
+	}
 
 retry_reserve:
 	if (!page)
--
1.9.1