Message-ID: <1476340749-13281-1-git-send-email-ming.ling@spreadtrum.com>
Date: Thu, 13 Oct 2016 14:39:09 +0800
From: "ming.ling" <ming.ling@...eadtrum.com>
To: <akpm@...ux-foundation.org>, <mgorman@...hsingularity.net>,
<vbabka@...e.cz>, <hannes@...xchg.org>, <mhocko@...e.com>,
<baiyaowei@...s.chinamobile.com>, <iamjoonsoo.kim@....com>,
<minchan@...nel.org>, <rientjes@...gle.com>, <hughd@...gle.com>,
<kirill.shutemov@...ux.intel.com>
CC: <riel@...hat.com>, <mgorman@...e.de>, <aquini@...hat.com>,
<corbet@....net>, <linux-mm@...ck.org>,
<linux-kernel@...r.kernel.org>, <orson.zhai@...eadtrum.com>,
<geng.ren@...eadtrum.com>, <chunyan.zhang@...eadtrum.com>,
<zhizhou.tian@...eadtrum.com>, <yuming.han@...eadtrum.com>,
<xiajing@...eadst.com>, Ming Ling <ming.ling@...eadtrum.com>
Subject: [PATCH v2] mm: exclude isolated non-lru pages from NR_ISOLATED_ANON or NR_ISOLATED_FILE.
From: Ming Ling <ming.ling@...eadtrum.com>
Non-lru pages do not belong to any LRU list, so counting them toward
NR_ISOLATED_ANON or NR_ISOLATED_FILE makes no sense. Doing so can
mislead functions such as pgdat_reclaimable_pages and
too_many_isolated.
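For reference, too_many_isolated() throttles direct reclaim by comparing
the isolated counters with the inactive LRU sizes. The following is only a
simplified sketch of that check (the real function in mm/vmscan.c has
additional conditions, e.g. it never throttles kswapd), and the name
too_many_isolated_sketch is illustrative:

/*
 * Simplified sketch: direct reclaim is throttled when the number of
 * isolated pages gets close to the size of the inactive list.
 */
static bool too_many_isolated_sketch(struct pglist_data *pgdat, int file)
{
        unsigned long inactive, isolated;

        if (file) {
                inactive = node_page_state(pgdat, NR_INACTIVE_FILE);
                isolated = node_page_state(pgdat, NR_ISOLATED_FILE);
        } else {
                inactive = node_page_state(pgdat, NR_INACTIVE_ANON);
                isolated = node_page_state(pgdat, NR_ISOLATED_ANON);
        }

        /* Isolated non-lru (e.g. zsmalloc) pages inflate 'isolated'. */
        return isolated > inactive;
}

If isolated non-lru pages are included in NR_ISOLATED_ANON, 'isolated' can
exceed 'inactive' even though the anon LRUs are not congested, and reclaim
is throttled needlessly.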
On mobile devices such as Android phones with 512MB of RAM, a large
zram swap is commonly configured. In some cases zram (zsmalloc) holds
a large number of non-lru pages, for example:
MemTotal:        468148 kB
Normal free:       5620 kB
Free swap:         4736 kB
Total swap:      409596 kB
ZRAM:            164616 kB  (zsmalloc non-lru pages)
active_anon:      60700 kB
inactive_anon:    60744 kB
active_file:      34420 kB
inactive_file:    37532 kB
The more non-lru pages zram uses for swap, the more
pgdat_reclaimable_pages and too_many_isolated are distorted.
This patch excludes isolated non-lru pages from NR_ISOLATED_ANON
and NR_ISOLATED_FILE so that those counters stay accurate.
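Isolated non-lru movable pages can be told apart cheaply because such a
page (e.g. a zsmalloc page) tags its page->mapping with
PAGE_MAPPING_MOVABLE, while an LRU page's mapping can never carry that
flag (see the comment kept in putback_movable_pages below). The test used
in this patch is __PageMovable(); the helper below is only an illustrative
sketch of that idea, not the kernel's definition:

/* Roughly how __PageMovable() identifies a non-lru movable page. */
static inline bool page_is_non_lru_movable(struct page *page)
{
        return ((unsigned long)page->mapping & PAGE_MAPPING_FLAGS) ==
                PAGE_MAPPING_MOVABLE;
}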
Signed-off-by: Ming Ling <ming.ling@...eadtrum.com>
---
mm/compaction.c | 6 ++++--
mm/migrate.c | 9 +++++----
2 files changed, 9 insertions(+), 6 deletions(-)
diff --git a/mm/compaction.c b/mm/compaction.c
index 0409a4a..ed4c553 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -643,8 +643,10 @@ static void acct_isolated(struct zone *zone, struct compact_control *cc)
if (list_empty(&cc->migratepages))
return;
- list_for_each_entry(page, &cc->migratepages, lru)
- count[!!page_is_file_cache(page)]++;
+ list_for_each_entry(page, &cc->migratepages, lru) {
+ if (likely(!__PageMovable(page)))
+ count[!!page_is_file_cache(page)]++;
+ }
mod_node_page_state(zone->zone_pgdat, NR_ISOLATED_ANON, count[0]);
mod_node_page_state(zone->zone_pgdat, NR_ISOLATED_FILE, count[1]);
diff --git a/mm/migrate.c b/mm/migrate.c
index 99250ae..abe48cc 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -168,8 +168,6 @@ void putback_movable_pages(struct list_head *l)
continue;
}
list_del(&page->lru);
- dec_node_page_state(page, NR_ISOLATED_ANON +
- page_is_file_cache(page));
/*
* We isolated non-lru movable page so here we can use
* __PageMovable because LRU page's mapping cannot have
@@ -185,6 +183,8 @@ void putback_movable_pages(struct list_head *l)
unlock_page(page);
put_page(page);
} else {
+ dec_node_page_state(page, NR_ISOLATED_ANON +
+ page_is_file_cache(page));
putback_lru_page(page);
}
}
@@ -1121,8 +1121,9 @@ static ICE_noinline int unmap_and_move(new_page_t get_new_page,
* restored.
*/
list_del(&page->lru);
- dec_node_page_state(page, NR_ISOLATED_ANON +
- page_is_file_cache(page));
+ if (likely(!__PageMovable(page)))
+ dec_node_page_state(page, NR_ISOLATED_ANON +
+ page_is_file_cache(page));
}
/*
--
1.9.1