Message-Id: <1369575522-26405-5-git-send-email-jiang.liu@huawei.com>
Date: Sun, 26 May 2013 21:38:32 +0800
From: Jiang Liu <liuj97@...il.com>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: Jiang Liu <jiang.liu@...wei.com>,
David Rientjes <rientjes@...gle.com>,
Wen Congyang <wency@...fujitsu.com>,
Mel Gorman <mgorman@...e.de>, Minchan Kim <minchan@...nel.org>,
KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
Michal Hocko <mhocko@...e.cz>,
James Bottomley <James.Bottomley@...senPartnership.com>,
Sergei Shtylyov <sergei.shtylyov@...entembedded.com>,
David Howells <dhowells@...hat.com>,
Mark Salter <msalter@...hat.com>,
Jianguo Wu <wujianguo@...wei.com>, linux-mm@...ck.org,
linux-arch@...r.kernel.org, linux-kernel@...r.kernel.org,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>,
"H. Peter Anvin" <hpa@...or.com>, x86@...nel.org,
Yinghai Lu <yinghai@...nel.org>,
Tang Chen <tangchen@...fujitsu.com>
Subject: [PATCH v8, part3 04/14] mm/x86: use free_reserved_area() to simplify code
Use the common helper function free_reserved_area() to simplify code.

Since free_reserved_area() prints the "Freeing %s memory: %ldK" summary
line itself, the open-coded printk() in free_init_pages() can go away,
and the caller-supplied descriptions drop the trailing "memory"
("unused kernel memory" becomes "unused kernel", "initrd memory"
becomes "initrd") so that the word is not printed twice.
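For reference, a simplified sketch of the common helper this patch
switches to, modeled on free_reserved_area() in mm/page_alloc.c as of
this series (the exact upstream body differs in details, e.g. it also
prints the freed address range):

	unsigned long free_reserved_area(void *start, void *end, int poison,
					 const char *s)
	{
		void *pos;
		unsigned long pages = 0;

		start = (void *)PAGE_ALIGN((unsigned long)start);
		end = (void *)((unsigned long)end & PAGE_MASK);
		for (pos = start; pos < end; pos += PAGE_SIZE, pages++) {
			/* Poison freed pages so any stale use of
			 * init-section data is caught, not silently
			 * tolerated. */
			if (poison)
				memset(pos, poison, PAGE_SIZE);
			/* Clear PG_reserved, fix up the totalram
			 * accounting and hand the page back to the
			 * buddy allocator. */
			free_reserved_page(virt_to_page(pos));
		}

		if (pages && s)
			pr_info("Freeing %s memory: %ldK\n",
				s, pages << (PAGE_SHIFT - 10));

		return pages;
	}

This mirrors the open-coded loop removed from free_init_pages() below,
with the page bookkeeping and the summary printk centralized in one
place.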
Signed-off-by: Jiang Liu <jiang.liu@...wei.com>
Cc: Thomas Gleixner <tglx@...utronix.de>
Cc: Ingo Molnar <mingo@...hat.com>
Cc: "H. Peter Anvin" <hpa@...or.com>
Cc: x86@...nel.org
Cc: Yinghai Lu <yinghai@...nel.org>
Cc: Tang Chen <tangchen@...fujitsu.com>
Cc: Wen Congyang <wency@...fujitsu.com>
Cc: Jianguo Wu <wujianguo@...wei.com>
Cc: linux-kernel@...r.kernel.org
---
arch/x86/mm/init.c | 14 +++-----------
arch/x86/mm/init_64.c | 5 ++---
2 files changed, 5 insertions(+), 14 deletions(-)
diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
index eaac174..62f1473 100644
--- a/arch/x86/mm/init.c
+++ b/arch/x86/mm/init.c
@@ -494,7 +494,6 @@ int devmem_is_allowed(unsigned long pagenr)
 
 void free_init_pages(char *what, unsigned long begin, unsigned long end)
 {
-	unsigned long addr;
 	unsigned long begin_aligned, end_aligned;
 
 	/* Make sure boundaries are page aligned */
@@ -509,8 +508,6 @@ void free_init_pages(char *what, unsigned long begin, unsigned long end)
 	if (begin >= end)
 		return;
 
-	addr = begin;
-
 	/*
 	 * If debugging page accesses then do not free this memory but
 	 * mark them not present - any buggy init-section access will
@@ -529,18 +526,13 @@ void free_init_pages(char *what, unsigned long begin, unsigned long end)
 	set_memory_nx(begin, (end - begin) >> PAGE_SHIFT);
 	set_memory_rw(begin, (end - begin) >> PAGE_SHIFT);
 
-	printk(KERN_INFO "Freeing %s: %luk freed\n", what, (end - begin) >> 10);
-
-	for (; addr < end; addr += PAGE_SIZE) {
-		memset((void *)addr, POISON_FREE_INITMEM, PAGE_SIZE);
-		free_reserved_page(virt_to_page(addr));
-	}
+	free_reserved_area((void *)begin, (void *)end, POISON_FREE_INITMEM, what);
 #endif
 }
 
 void free_initmem(void)
 {
-	free_init_pages("unused kernel memory",
+	free_init_pages("unused kernel",
 			(unsigned long)(&__init_begin),
 			(unsigned long)(&__init_end));
 }
@@ -566,7 +558,7 @@ void __init free_initrd_mem(unsigned long start, unsigned long end)
 	 * - relocate_initrd()
 	 * So here We can do PAGE_ALIGN() safely to get partial page to be freed
 	 */
-	free_init_pages("initrd memory", start, PAGE_ALIGN(end));
+	free_init_pages("initrd", start, PAGE_ALIGN(end));
 }
 #endif
diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index bb00c46..32e2f25 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -1166,11 +1166,10 @@ void mark_rodata_ro(void)
 	set_memory_ro(start, (end-start) >> PAGE_SHIFT);
 #endif
 
-	free_init_pages("unused kernel memory",
+	free_init_pages("unused kernel",
 			(unsigned long) __va(__pa_symbol(text_end)),
 			(unsigned long) __va(__pa_symbol(rodata_start)));
-
-	free_init_pages("unused kernel memory",
+	free_init_pages("unused kernel",
 			(unsigned long) __va(__pa_symbol(rodata_end)),
 			(unsigned long) __va(__pa_symbol(_sdata)));
 }
--
1.8.1.2