Message-ID: <1699CE87DE933F49876AD744B5DC140F2312E948@dggemm526-mbx.china.huawei.com>
Date:   Wed, 22 Jul 2020 08:41:17 +0000
From:   "liwei (CM)" <liwei213@...wei.com>
To:     Mike Rapoport <rppt@...ux.ibm.com>
CC:     "catalin.marinas@....com" <catalin.marinas@....com>,
        "will@...nel.org" <will@...nel.org>,
        "Xiaqing (A)" <saberlily.xia@...ilicon.com>,
        "Chenfeng (puck)" <puck.chen@...ilicon.com>,
        butao <butao@...ilicon.com>,
        fengbaopeng <fengbaopeng2@...ilicon.com>,
        "nsaenzjulienne@...e.de" <nsaenzjulienne@...e.de>,
        "steve.capper@....com" <steve.capper@....com>,
        "Song Bao Hua (Barry Song)" <song.bao.hua@...ilicon.com>,
        "linux-arm-kernel@...ts.infradead.org" 
        <linux-arm-kernel@...ts.infradead.org>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        sujunfei <sujunfei2@...ilicon.com>,
        zhaojiapeng <zhaojiapeng@...wei.com>
Subject: RE: [PATCH] arm64: mm: free unused memmap for sparse memory model that define VMEMMAP



-----Original Message-----
From: Mike Rapoport [mailto:rppt@...ux.ibm.com]
Sent: July 22, 2020 14:07
To: liwei (CM) <liwei213@...wei.com>
Cc: catalin.marinas@....com; will@...nel.org; Xiaqing (A) <saberlily.xia@...ilicon.com>; Chenfeng (puck) <puck.chen@...ilicon.com>; butao <butao@...ilicon.com>; fengbaopeng <fengbaopeng2@...ilicon.com>; nsaenzjulienne@...e.de; steve.capper@....com; Song Bao Hua (Barry Song) <song.bao.hua@...ilicon.com>; linux-arm-kernel@...ts.infradead.org; linux-kernel@...r.kernel.org; sujunfei <sujunfei2@...ilicon.com>
Subject: Re: [PATCH] arm64: mm: free unused memmap for sparse memory model that define VMEMMAP

Hi,

On Tue, Jul 21, 2020 at 03:32:03PM +0800, Wei Li wrote:
> For memory holes, the sparse memory model with SPARSEMEM_VMEMMAP enabled
> does not free the memory reserved for the page map; this patch does so.

Are there numbers showing how much memory is actually freed?

The freeing of empty memmap would become rather complex with these changes; do the memory savings justify it?

Hi, Mike
In the sparse memory model, the default section size is 1 GB (SECTION_SIZE_BITS = 30), so the memmap is allocated for a whole section even when part of it is a hole. The patch therefore takes effect whenever a region of memory is smaller than a section, i.e. whenever a present section contains a hole:

1) For example, our platform has 8 GB of DDR, but the 3.5~4 GB range is occupied by SoC registers. When the page map is created for the whole of memory3 (3~4 GB), 512 MB / 4 KB x 64 bytes = 8 MB is wasted, because no page map is needed for the 3.5~4 GB hole. The displaced 512 MB of DDR appears at 16~16.5 GB instead, so the page map for 16.5~17 GB is equally unnecessary and the patch saves another 8 MB there. In total, 16 MB is saved (see the sketch after this list).

2) Some modules reserve large amounts of memory with the no-map attribute. On our platform the modem module reserves more than 256 MB, for which the patch saves 4 MB. In general, whenever such a reserved region is larger than 128 MB, the patch can free the unnecessary page map memory.
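
For reference, the figures in 1) and 2) come from a back-of-envelope calculation (my own illustration, not part of the patch), assuming 4 KB pages and sizeof(struct page) == 64 as on our platform:

	/* memmap cost of the 512 MB hole at 3.5~4 GB */
	unsigned long hole_bytes   = 512UL << 20;
	unsigned long memmap_waste = hole_bytes / 4096 * 64;	/* = 8 MB */
	/* the shifted copy of the hole at 16.5~17 GB doubles this to 16 MB;
	 * a 256 MB no-map reservation likewise wastes 256 MB / 4 KB * 64 = 4 MB */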

It may be possible to reduce some of the waste by shrinking the section size, but freeing the unused page map when VMEMMAP is defined is another approach, consistent with what is already done for the flat memory model and for the sparse memory model without VMEMMAP, and it makes the code more complete.
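
To put the section-size alternative in numbers (illustrative only, nothing here changes it; the define lives in arch/arm64/include/asm/sparsemem.h, and the value 27 below is hypothetical):

	#define SECTION_SIZE_BITS	30	/* current default: 1 GB sections,
						 * i.e. 1 GB / 4 KB * 64 = 16 MB of
						 * memmap per present section */
	/* a hypothetical smaller value such as 27 would give 128 MB sections and
	 * bound the memmap waste of a partly-present section to about 2 MB,
	 * at the cost of finer section granularity system-wide */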

If you have a better idea, I'd be happy to discuss it with you.

Thanks!

> Signed-off-by: Wei Li <liwei213@...wei.com>
> Signed-off-by: Chen Feng <puck.chen@...ilicon.com>
> Signed-off-by: Xia Qing <saberlily.xia@...ilicon.com>
> ---
>  arch/arm64/mm/init.c | 81 +++++++++++++++++++++++++++++++++++++++++++++-------
>  1 file changed, 71 insertions(+), 10 deletions(-)
> 
> diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
> index 1e93cfc7c47a..d1b56b47d5ba 100644
> --- a/arch/arm64/mm/init.c
> +++ b/arch/arm64/mm/init.c
> @@ -441,7 +441,48 @@ void __init bootmem_init(void)
>  	memblock_dump_all();
>  }
> 
> -#ifndef CONFIG_SPARSEMEM_VMEMMAP
> +#ifdef CONFIG_SPARSEMEM_VMEMMAP
> +#define VMEMMAP_PAGE_INUSE 0xFD
> +static inline void free_memmap(unsigned long start_pfn, unsigned long end_pfn)
> +{
> +	unsigned long addr, end;
> +	unsigned long next;
> +	pmd_t *pmd;
> +	void *page_addr;
> +	phys_addr_t phys_addr;
> +
> +	addr = (unsigned long)pfn_to_page(start_pfn);
> +	end = (unsigned long)pfn_to_page(end_pfn);
> +
> +	pmd = pmd_offset(pud_offset(pgd_offset_k(addr), addr), addr);
> +	for (; addr < end; addr = next, pmd++) {
> +		next = pmd_addr_end(addr, end);
> +
> +		if (!pmd_present(*pmd))
> +			continue;
> +
> +		if (IS_ALIGNED(addr, PMD_SIZE) &&
> +			IS_ALIGNED(next, PMD_SIZE)) {
> +			phys_addr = __pfn_to_phys(pmd_pfn(*pmd));
> +			free_bootmem(phys_addr, PMD_SIZE);
> +			pmd_clear(pmd);
> +		} else {
> +			/* If here, we are freeing vmemmap pages. */
> +			memset((void *)addr, VMEMMAP_PAGE_INUSE, next - addr);
> +			page_addr = page_address(pmd_page(*pmd));
> +
> +			if (!memchr_inv(page_addr, VMEMMAP_PAGE_INUSE,
> +				PMD_SIZE)) {
> +				phys_addr = __pfn_to_phys(pmd_pfn(*pmd));
> +				free_bootmem(phys_addr, PMD_SIZE);
> +				pmd_clear(pmd);
> +			}
> +		}
> +	}
> +
> +	flush_tlb_all();
> +}
> +#else
>  static inline void free_memmap(unsigned long start_pfn, unsigned long end_pfn)
>  {
>  	struct page *start_pg, *end_pg;
> @@ -468,31 +509,53 @@ static inline void free_memmap(unsigned long start_pfn, unsigned long end_pfn)
>  		memblock_free(pg, pgend - pg);
>  }
> 
> +#endif
> +
>  /*
>   * The mem_map array can get very big. Free the unused area of the memory map.
>   */
>  static void __init free_unused_memmap(void)
>  {
> -	unsigned long start, prev_end = 0;
> +	unsigned long start, cur_start, prev_end = 0;
>  	struct memblock_region *reg;
> 
>  	for_each_memblock(memory, reg) {
> -		start = __phys_to_pfn(reg->base);
> +		cur_start = __phys_to_pfn(reg->base);
> 
>  #ifdef CONFIG_SPARSEMEM
>  		/*
>  		 * Take care not to free memmap entries that don't exist due
>  		 * to SPARSEMEM sections which aren't present.
>  		 */
> -		start = min(start, ALIGN(prev_end, PAGES_PER_SECTION));
> -#endif
> +		start = min(cur_start, ALIGN(prev_end, PAGES_PER_SECTION));
> +
>  		/*
> -		 * If we had a previous bank, and there is a space between the
> -		 * current bank and the previous, free it.
> +		 * Free memory in the case of:
> +		 * 1. if cur_start - prev_end <= PAGES_PER_SECTION,
> +		 * free prev_end ~ cur_start.
> +		 * 2. if cur_start - prev_end > PAGES_PER_SECTION,
> +		 * free prev_end ~ ALIGN(prev_end, PAGES_PER_SECTION).
>  		 */
>  		if (prev_end && prev_end < start)
>  			free_memmap(prev_end, start);
> 
> +		/*
> +		 * Free memory in the case of:
> +		 * if cur_start - prev_end > PAGES_PER_SECTION,
> +		 * free ALIGN_DOWN(cur_start, PAGES_PER_SECTION) ~ cur_start.
> +		 */
> +		if (cur_start > start &&
> +		    !IS_ALIGNED(cur_start, PAGES_PER_SECTION))
> +			free_memmap(ALIGN_DOWN(cur_start, PAGES_PER_SECTION),
> +				    cur_start);
> +#else
> +		/*
> +		 * If we had a previous bank, and there is a space between the
> +		 * current bank and the previous, free it.
> +		 */
> +		if (prev_end && prev_end < cur_start)
> +			free_memmap(prev_end, cur_start);
> +#endif
>  		/*
>  		 * Align up here since the VM subsystem insists that the
>  		 * memmap entries are valid from the bank end aligned to
> @@ -507,7 +570,6 @@ static void __init free_unused_memmap(void)
>  		free_memmap(prev_end, ALIGN(prev_end, PAGES_PER_SECTION));
>  #endif
>  }
> -#endif	/* !CONFIG_SPARSEMEM_VMEMMAP */
> 
>  /*
>   * mem_init() marks the free areas in the mem_map and tells us how much memory
> @@ -524,9 +586,8 @@ void __init mem_init(void)
> 
>  	set_max_mapnr(max_pfn - PHYS_PFN_OFFSET);
> 
> -#ifndef CONFIG_SPARSEMEM_VMEMMAP
>  	free_unused_memmap();
> -#endif
> +
>  	/* this will put all unused low memory onto the freelists */
>  	memblock_free_all();
> 
> --
> 2.15.0
> 

--
Sincerely yours,
Mike.
