Date:   Wed, 04 Nov 2020 16:03:36 -0800
From:   Sudarshan Rajagopalan <sudaraja@...eaurora.org>
To:     Will Deacon <will@...nel.org>,
        Catalin Marinas <catalin.marinas@....com>,
        linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org
Cc:     Gavin Shan <gshan@...hat.com>,
        Anshuman Khandual <anshuman.khandual@....com>,
        Mark Rutland <mark.rutland@....com>,
        Logan Gunthorpe <logang@...tatee.com>,
        David Hildenbrand <david@...hat.com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Steven Price <steven.price@....com>
Subject: Re: [PATCH v4] arm64/mm: add fallback option to allocate virtually
 contiguous memory

On 2020-10-16 11:56, Sudarshan Rajagopalan wrote:

Hello Will, Catalin,

Did you have a chance to review this patch? It has been reviewed by 
others and I haven't seen any NAKs. This patch would be useful to have 
so that memory hot-add doesn't fail when such PMD_SIZE contiguous 
allocations aren't available, which is usually the case on low-RAM 
devices.

> When section mappings are enabled, we allocate vmemmap pages from
> physically contiguous memory of size PMD_SIZE using
> vmemmap_alloc_block_buf(). Section mappings are good for reducing TLB
> pressure. But when the system is highly fragmented and memory blocks
> are being hot-added at runtime, it's possible that such physically
> contiguous memory allocations can fail. Rather than failing the memory
> hot-add procedure, add a fallback option to allocate vmemmap pages from
> discontiguous pages using vmemmap_populate_basepages().
> 
> Signed-off-by: Sudarshan Rajagopalan <sudaraja@...eaurora.org>
> Reviewed-by: Gavin Shan <gshan@...hat.com>
> Reviewed-by: Anshuman Khandual <anshuman.khandual@....com>
> Cc: Catalin Marinas <catalin.marinas@....com>
> Cc: Will Deacon <will@...nel.org>
> Cc: Anshuman Khandual <anshuman.khandual@....com>
> Cc: Mark Rutland <mark.rutland@....com>
> Cc: Logan Gunthorpe <logang@...tatee.com>
> Cc: David Hildenbrand <david@...hat.com>
> Cc: Andrew Morton <akpm@...ux-foundation.org>
> Cc: Steven Price <steven.price@....com>
> ---
>  arch/arm64/mm/mmu.c | 7 +++++--
>  1 file changed, 5 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> index 75df62fea1b6..44486fd0e883 100644
> --- a/arch/arm64/mm/mmu.c
> +++ b/arch/arm64/mm/mmu.c
> @@ -1121,8 +1121,11 @@ int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
>  			void *p = NULL;
> 
>  			p = vmemmap_alloc_block_buf(PMD_SIZE, node, altmap);
> -			if (!p)
> -				return -ENOMEM;
> +			if (!p) {
> +				if (vmemmap_populate_basepages(addr, next, node, altmap))
> +					return -ENOMEM;
> +				continue;
> +			}
> 
>  			pmd_set_huge(pmdp, __pa(p), __pgprot(PROT_SECT_NORMAL));
>  		} else
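
For context, here is a minimal userspace sketch of the fallback pattern 
the hunk above implements: try one large contiguous allocation first, 
and only if that fails, populate the same range from individual 
base-page-sized chunks. The names and sizes below are illustrative 
stand-ins for the kernel helpers, not actual kernel code.

/* Compile with: cc -std=c99 fallback_sketch.c */
#include <stdio.h>
#include <stdlib.h>

#define PAGE_SIZE	4096UL
#define PMD_SIZE	(512UL * PAGE_SIZE)	/* 2 MiB with 4K pages */

/* Models the PMD-loop body of vmemmap_populate() in the patch above. */
static int populate_section(void **slots, unsigned long nr_pages)
{
	void *p = malloc(PMD_SIZE);	/* stands in for vmemmap_alloc_block_buf() */

	if (p) {
		/* "Section mapping": one contiguous backing buffer for the range. */
		for (unsigned long i = 0; i < nr_pages; i++)
			slots[i] = (char *)p + i * PAGE_SIZE;
		return 0;
	}

	/* Fallback: base pages, like vmemmap_populate_basepages(). */
	for (unsigned long i = 0; i < nr_pages; i++) {
		slots[i] = malloc(PAGE_SIZE);
		if (!slots[i])
			return -1;	/* only now give up (-ENOMEM in the kernel) */
	}
	return 0;
}

int main(void)
{
	unsigned long nr_pages = PMD_SIZE / PAGE_SIZE;
	void **slots = calloc(nr_pages, sizeof(*slots));

	if (!slots || populate_section(slots, nr_pages))
		return 1;
	printf("populated %lu page slots\n", nr_pages);
	return 0;
}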

-- 
Sudarshan

--
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, a 
Linux Foundation Collaborative Project
