Message-Id: <1d32d83e54542050dba3f711a8d10b1e951a9a58.1399705884.git.nasa4836@gmail.com>
Date: Sat, 10 May 2014 15:15:39 +0800
From: Jianyu Zhan <nasa4836@...il.com>
To: akpm@...ux-foundation.org, mgorman@...e.de,
cody@...ux.vnet.ibm.com, liuj97@...il.com,
zhangyanfei@...fujitsu.com, srivatsa.bhat@...ux.vnet.ibm.com,
dave@...1.net, iamjoonsoo.kim@....com, n-horiguchi@...jp.nec.com,
kirill.shutemov@...ux.intel.com, schwidefsky@...ibm.com,
nasa4836@...il.com, gorcunov@...il.com, riel@...hat.com,
cl@...ux.com, toshi.kani@...com, paul.gortmaker@...driver.com
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: [PATCH 1/3] mm: add comment for __mod_zone_page_state
__mod_zone_page_state() is not irq-safe, so it must be used with care.
It is not appropriately documented at the moment. This patch adds a
comment for it, and also documents some of its call sites.
Suggested-by: Andrew Morton <akpm@...ux-foundation.org>
Signed-off-by: Jianyu Zhan <nasa4836@...il.com>
---
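(Note for reviewers, not part of the commit log: a minimal sketch of the
usage rule the new comments document. The helper name and the NR_MLOCK
item below are only for illustration, they are not taken from the patch.)

	/* kernel context; relies on <linux/mm.h> and <linux/vmstat.h> */
	static void zone_stat_usage_sketch(struct zone *zone)
	{
		unsigned long flags;

		/*
		 * Either guarantee exclusion yourself, e.g. by disabling
		 * interrupts around the irq-unsafe helper ...
		 */
		local_irq_save(flags);
		__mod_zone_page_state(zone, NR_MLOCK, 1);
		local_irq_restore(flags);

		/*
		 * ... or, when no such guarantee exists, use the irq-safe
		 * wrapper instead.
		 */
		mod_zone_page_state(zone, NR_MLOCK, -1);
	}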
mm/page_alloc.c | 2 ++
mm/rmap.c | 6 ++++++
mm/vmstat.c | 16 +++++++++++++++-
3 files changed, 23 insertions(+), 1 deletion(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 5dba293..9d6f474 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -659,6 +659,8 @@ static inline int free_pages_check(struct page *page)
*
* And clear the zone's pages_scanned counter, to hold off the "all pages are
* pinned" detection logic.
+ *
+ * Note: this function should be called with interrupts disabled.
*/
static void free_pcppages_bulk(struct zone *zone, int count,
struct per_cpu_pages *pcp)
diff --git a/mm/rmap.c b/mm/rmap.c
index 9c3e773..6078a30 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -979,6 +979,8 @@ void page_add_anon_rmap(struct page *page,
/*
* Special version of the above for do_swap_page, which often runs
* into pages that are exclusively owned by the current process.
+ * So we can use the irq-unsafe version __{inc|mod}_zone_page_state
+ * here, without others racing to change it in between.
* Everybody else should continue to use page_add_anon_rmap above.
*/
void do_page_add_anon_rmap(struct page *page,
@@ -1077,6 +1079,10 @@ void page_remove_rmap(struct page *page)
/*
* Hugepages are not counted in NR_ANON_PAGES nor NR_FILE_MAPPED
* and not charged by memcg for now.
+ *
+ * And we are the last user of this page, so it is safe to use
+ * the irq-unsafe version __{mod|dec}_zone_page_state here, since
+ * nobody else can race with us.
*/
if (unlikely(PageHuge(page)))
goto out;
diff --git a/mm/vmstat.c b/mm/vmstat.c
index 302dd07..778f154 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -207,7 +207,21 @@ void set_pgdat_percpu_threshold(pg_data_t *pgdat,
}
/*
- * For use when we know that interrupts are disabled.
+ * Optimized modification function.
+ *
+ * The code basically does the modification in two steps:
+ *
+ * 1. read the current counter based on the processor number
+ * 2. modify the counter and write it back.
+ *
+ * So this function should be used with the guarantee that
+ *
+ * 1. interrupts are disabled, or
+ * 2. interrupts are enabled, but no other sites would race to
+ * modify this counter in between.
+ *
+ * Otherwise, the irq-safe version mod_zone_page_state() should
+ * be used instead.
*/
void __mod_zone_page_state(struct zone *zone, enum zone_stat_item item,
int delta)
--
2.0.0-rc1
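
(Illustration only, not part of the patch: why the two-step update
described in the mm/vmstat.c comment needs exclusion. The helper below
is made up to show the race window; it is not the real mm/vmstat.c
code.)

	/* A non-atomic read-modify-write, like the per-cpu counter update. */
	static void two_step_update_sketch(long *counter, int delta)
	{
		long x;

		x = *counter;		/* step 1: read the current value  */
					/*
					 * If an interrupt updates *counter
					 * right here, the write below will
					 * overwrite that update and it is
					 * lost.
					 */
		*counter = x + delta;	/* step 2: modify and write it back */
	}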
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/