Message-Id: <1365256509-29024-5-git-send-email-jiang.liu@huawei.com>
Date: Sat, 6 Apr 2013 21:54:58 +0800
From: Jiang Liu <liuj97@...il.com>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: Jiang Liu <jiang.liu@...wei.com>,
David Rientjes <rientjes@...gle.com>,
Wen Congyang <wency@...fujitsu.com>,
Mel Gorman <mgorman@...e.de>, Minchan Kim <minchan@...nel.org>,
KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
Michal Hocko <mhocko@...e.cz>,
James Bottomley <James.Bottomley@...senPartnership.com>,
Sergei Shtylyov <sergei.shtylyov@...entembedded.com>,
David Howells <dhowells@...hat.com>,
Mark Salter <msalter@...hat.com>,
Jianguo Wu <wujianguo@...wei.com>, linux-mm@...ck.org,
linux-arch@...r.kernel.org, linux-kernel@...r.kernel.org,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>,
"H. Peter Anvin" <hpa@...or.com>, x86@...nel.org,
Yinghai Lu <yinghai@...nel.org>,
Tang Chen <tangchen@...fujitsu.com>
Subject: [PATCH v4, part3 04/15] mm/x86: use free_reserved_area() to simplify code

Use the common helper function free_reserved_area() to simplify the code.

Signed-off-by: Jiang Liu <jiang.liu@...wei.com>
Cc: Thomas Gleixner <tglx@...utronix.de>
Cc: Ingo Molnar <mingo@...hat.com>
Cc: "H. Peter Anvin" <hpa@...or.com>
Cc: x86@...nel.org
Cc: Yinghai Lu <yinghai@...nel.org>
Cc: Tang Chen <tangchen@...fujitsu.com>
Cc: Wen Congyang <wency@...fujitsu.com>
Cc: Jianguo Wu <wujianguo@...wei.com>
Cc: linux-kernel@...r.kernel.org
---
 arch/x86/mm/init.c    | 14 +++-----------
 arch/x86/mm/init_64.c |  5 ++---
 2 files changed, 5 insertions(+), 14 deletions(-)
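
For reference, here is a minimal sketch of the common helper this patch
switches to, as introduced by the earlier mm part of this series; the
body below is paraphrased from that part and is not itself included in
this patch:

unsigned long free_reserved_area(unsigned long start, unsigned long end,
				 int poison, char *s)
{
	unsigned long pages, pos;

	/* Only whole pages inside [start, end) are freed */
	pos = start = PAGE_ALIGN(start);
	end &= PAGE_MASK;
	for (pages = 0; pos < end; pos += PAGE_SIZE, pages++) {
		if (poison)
			memset((void *)pos, poison, PAGE_SIZE);
		/* clear PG_reserved and hand the page to the buddy allocator */
		free_reserved_page(virt_to_page(pos));
	}

	if (pages && s)
		pr_info("Freeing %s memory: %ldK (%lx - %lx)\n",
			s, pages << (PAGE_SHIFT - 10), start, end);

	return pages;
}

Since the helper itself prints "Freeing %s memory: ...", the name
strings passed to free_init_pages() below drop the trailing " memory"
("unused kernel memory" becomes "unused kernel", "initrd memory"
becomes "initrd") so that the word is not printed twice.
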
diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
index fdc5dca..6738e1b 100644
--- a/arch/x86/mm/init.c
+++ b/arch/x86/mm/init.c
@@ -477,7 +477,6 @@ int devmem_is_allowed(unsigned long pagenr)
 
 void free_init_pages(char *what, unsigned long begin, unsigned long end)
 {
-	unsigned long addr;
 	unsigned long begin_aligned, end_aligned;
 
 	/* Make sure boundaries are page aligned */
@@ -492,8 +491,6 @@ void free_init_pages(char *what, unsigned long begin, unsigned long end)
 	if (begin >= end)
 		return;
 
-	addr = begin;
-
 	/*
 	 * If debugging page accesses then do not free this memory but
 	 * mark them not present - any buggy init-section access will
@@ -512,18 +509,13 @@ void free_init_pages(char *what, unsigned long begin, unsigned long end)
 	set_memory_nx(begin, (end - begin) >> PAGE_SHIFT);
 	set_memory_rw(begin, (end - begin) >> PAGE_SHIFT);
 
-	printk(KERN_INFO "Freeing %s: %luk freed\n", what, (end - begin) >> 10);
-
-	for (; addr < end; addr += PAGE_SIZE) {
-		memset((void *)addr, POISON_FREE_INITMEM, PAGE_SIZE);
-		free_reserved_page(virt_to_page(addr));
-	}
+	free_reserved_area(begin, end, POISON_FREE_INITMEM, what);
 #endif
 }
 
 void free_initmem(void)
 {
-	free_init_pages("unused kernel memory",
+	free_init_pages("unused kernel",
 			(unsigned long)(&__init_begin),
 			(unsigned long)(&__init_end));
 }
@@ -549,7 +541,7 @@ void __init free_initrd_mem(unsigned long start, unsigned long end)
 	 *   - relocate_initrd()
 	 * So here We can do PAGE_ALIGN() safely to get partial page to be freed
 	 */
-	free_init_pages("initrd memory", start, PAGE_ALIGN(end));
+	free_init_pages("initrd", start, PAGE_ALIGN(end));
 }
 #endif
 
diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index caad9a0..0c6efb8 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -1165,11 +1165,10 @@ void mark_rodata_ro(void)
 	set_memory_ro(start, (end-start) >> PAGE_SHIFT);
 #endif
 
-	free_init_pages("unused kernel memory",
+	free_init_pages("unused kernel",
 			(unsigned long) __va(__pa_symbol(text_end)),
 			(unsigned long) __va(__pa_symbol(rodata_start)));
-
-	free_init_pages("unused kernel memory",
+	free_init_pages("unused kernel",
 			(unsigned long) __va(__pa_symbol(rodata_end)),
 			(unsigned long) __va(__pa_symbol(_sdata)));
 }
--
1.7.9.5