Date: Sat, 05 Oct 2013 14:52:09 +0800
From: Zhang Yanfei <zhangyanfei.yes@...il.com>
To: Wanpeng Li <liwanp@...ux.vnet.ibm.com>
CC: Andrew Morton <akpm@...ux-foundation.org>, Wen Congyang <wency@...fujitsu.com>,
	Tang Chen <tangchen@...fujitsu.com>, Toshi Kani <toshi.kani@...com>,
	isimatu.yasuaki@...fujitsu.com, Linux MM <linux-mm@...ck.org>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	Zhang Yanfei <zhangyanfei@...fujitsu.com>
Subject: Re: [PATCH 2/2] mm/sparsemem: Fix a bug in free_map_bootmem when CONFIG_SPARSEMEM_VMEMMAP

Hello Wanpeng,

On 10/05/2013 01:54 PM, Wanpeng Li wrote:
> Hi Yanfei,
> On Thu, Oct 03, 2013 at 11:32:02AM +0800, Zhang Yanfei wrote:
>> From: Zhang Yanfei <zhangyanfei@...fujitsu.com>
>>
>> We pass the number of pages which hold the page structs of a memory
>> section to the function free_map_bootmem. This is right when
>> !CONFIG_SPARSEMEM_VMEMMAP but wrong when CONFIG_SPARSEMEM_VMEMMAP.
>> When CONFIG_SPARSEMEM_VMEMMAP, we should pass the number of pages
>> in a memory section to free_map_bootmem.
>>
>> So the fix is to remove the nr_pages parameter. When
>> CONFIG_SPARSEMEM_VMEMMAP, we directly use the predefined macro
>> PAGES_PER_SECTION in free_map_bootmem. When !CONFIG_SPARSEMEM_VMEMMAP,
>> we calculate the number of pages needed to hold the page structs for a
>> memory section and use that value in free_map_bootmem.
>>
>> Signed-off-by: Zhang Yanfei <zhangyanfei@...fujitsu.com>
>> ---
>>  mm/sparse.c |   17 +++++++----------
>>  1 files changed, 7 insertions(+), 10 deletions(-)
>>
>> diff --git a/mm/sparse.c b/mm/sparse.c
>> index fbb9dbc..908c134 100644
>> --- a/mm/sparse.c
>> +++ b/mm/sparse.c
>> @@ -603,10 +603,10 @@ static void __kfree_section_memmap(struct page *memmap)
>>  	vmemmap_free(start, end);
>>  }
>>  #ifdef CONFIG_MEMORY_HOTREMOVE
>> -static void free_map_bootmem(struct page *memmap, unsigned long nr_pages)
>> +static void free_map_bootmem(struct page *memmap)
>>  {
>>  	unsigned long start = (unsigned long)memmap;
>> -	unsigned long end = (unsigned long)(memmap + nr_pages);
>> +	unsigned long end = (unsigned long)(memmap + PAGES_PER_SECTION);
>>
>>  	vmemmap_free(start, end);
>>  }
>> @@ -648,11 +648,13 @@ static void __kfree_section_memmap(struct page *memmap)
>>  }
>>
>>  #ifdef CONFIG_MEMORY_HOTREMOVE
>> -static void free_map_bootmem(struct page *memmap, unsigned long nr_pages)
>> +static void free_map_bootmem(struct page *memmap)
>>  {
>>  	unsigned long maps_section_nr, removing_section_nr, i;
>>  	unsigned long magic;
>>  	struct page *page = virt_to_page(memmap);
>> +	unsigned long nr_pages = get_order(sizeof(struct page) *
>> +					   PAGES_PER_SECTION);
>
> Why replace PAGE_ALIGN(XXX) >> PAGE_SHIFT by get_order(XXX)? This will result
> in a memory leak.

oops... I will correct this by sending a new version. Thanks.

-- 
Thanks.
Zhang Yanfei
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/