Message-ID: <20181106113149.GC24198@intel.com>
Date: Tue, 6 Nov 2018 19:31:49 +0800
From: Aaron Lu <aaron.lu@...el.com>
To: linux-mm@...ck.org, linux-kernel@...r.kernel.org,
netdev@...r.kernel.org
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Paweł Staszewski <pstaszewski@...are.pl>,
Jesper Dangaard Brouer <brouer@...hat.com>,
Eric Dumazet <eric.dumazet@...il.com>,
Tariq Toukan <tariqt@...lanox.com>,
Ilias Apalodimas <ilias.apalodimas@...aro.org>,
Yoel Caspersen <yoel@...knet.dk>,
Mel Gorman <mgorman@...hsingularity.net>,
Saeed Mahameed <saeedm@...lanox.com>,
Michal Hocko <mhocko@...e.com>,
Vlastimil Babka <vbabka@...e.cz>,
Dave Hansen <dave.hansen@...ux.intel.com>,
Alexander Duyck <alexander.h.duyck@...ux.intel.com>
Subject: [PATCH v3 2/2] mm/page_alloc: use a single function to free page
We have multiple places that free a page and most of them do similar
things; a common function can be used to reduce the code duplication.
It also avoids the case where a bug gets fixed in one function but is
left unfixed in another.
Signed-off-by: Aaron Lu <aaron.lu@...el.com>
---
v3: Vlastimil mentioned a possible performance loss from using
page_ref_sub_and_test(page, 1) in place of put_page_testzero(page).
Since we aren't sure, play it safe: keep the page-ref-decreasing code
as is and only move the page-freeing part into a common function.
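
Not part of the patch, just a minimal sketch of the difference behind
this caution, based on the helpers in include/linux/page_ref.h and
include/linux/mm.h:

	/*
	 * put_page_testzero() ends up in page_ref_dec_and_test(), i.e.
	 * atomic_dec_and_test(&page->_refcount), whereas
	 * page_ref_sub_and_test(page, 1) goes through
	 * atomic_sub_and_test(1, &page->_refcount).  On some
	 * architectures a dedicated decrement can be cheaper than a
	 * subtract-immediate, so v3 keeps each call site's original
	 * primitive and only shares the freeing path:
	 */
	if (put_page_testzero(page))		/* __free_pages() */
		free_the_page(page, order);

	if (page_ref_sub_and_test(page, count))	/* __page_frag_cache_drain() */
		free_the_page(page, compound_order(page));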
mm/page_alloc.c | 37 ++++++++++++++-----------------------
1 file changed, 14 insertions(+), 23 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 91a9a6af41a2..431a03aa96f8 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4425,16 +4425,19 @@ unsigned long get_zeroed_page(gfp_t gfp_mask)
 }
 EXPORT_SYMBOL(get_zeroed_page);
 
-void __free_pages(struct page *page, unsigned int order)
+static inline void free_the_page(struct page *page, unsigned int order)
 {
-	if (put_page_testzero(page)) {
-		if (order == 0)
-			free_unref_page(page);
-		else
-			__free_pages_ok(page, order);
-	}
+	if (order == 0)
+		free_unref_page(page);
+	else
+		__free_pages_ok(page, order);
 }
 
+void __free_pages(struct page *page, unsigned int order)
+{
+	if (put_page_testzero(page))
+		free_the_page(page, order);
+}
 EXPORT_SYMBOL(__free_pages);
 
 void free_pages(unsigned long addr, unsigned int order)
@@ -4483,14 +4486,8 @@ void __page_frag_cache_drain(struct page *page, unsigned int count)
 {
 	VM_BUG_ON_PAGE(page_ref_count(page) == 0, page);
 
-	if (page_ref_sub_and_test(page, count)) {
-		unsigned int order = compound_order(page);
-
-		if (order == 0)
-			free_unref_page(page);
-		else
-			__free_pages_ok(page, order);
-	}
+	if (page_ref_sub_and_test(page, count))
+		free_the_page(page, compound_order(page));
 }
 EXPORT_SYMBOL(__page_frag_cache_drain);
 
@@ -4555,14 +4552,8 @@ void page_frag_free(void *addr)
 {
 	struct page *page = virt_to_head_page(addr);
 
-	if (unlikely(put_page_testzero(page))) {
-		unsigned int order = compound_order(page);
-
-		if (order == 0)
-			free_unref_page(page);
-		else
-			__free_pages_ok(page, order);
-	}
+	if (unlikely(put_page_testzero(page)))
+		free_the_page(page, compound_order(page));
 }
 EXPORT_SYMBOL(page_frag_free);
 
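
Not part of the patch, just a reading aid: the consolidated helper that
all three call sites above now funnel into, reassembled from the first
hunk with explanatory comments added (the comments are mine):

	static inline void free_the_page(struct page *page, unsigned int order)
	{
		if (order == 0)
			/* order-0 pages are batched on the per-cpu (pcp) lists */
			free_unref_page(page);
		else
			/* higher-order pages go straight back to the buddy allocator */
			__free_pages_ok(page, order);
	}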
--
2.17.2