Date:   Thu, 10 Sep 2020 10:08:56 +0200
From:   David Hildenbrand <david@...hat.com>
To:     Anshuman Khandual <anshuman.khandual@....com>,
        Sudarshan Rajagopalan <sudaraja@...eaurora.org>,
        linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org
Cc:     Catalin Marinas <catalin.marinas@....com>,
        Will Deacon <will@...nel.org>,
        Mark Rutland <mark.rutland@....com>,
        Logan Gunthorpe <logang@...tatee.com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Steven Price <steven.price@....com>
Subject: Re: [PATCH] arm64/mm: add fallback option to allocate virtually
 contiguous memory

On 10.09.20 08:45, Anshuman Khandual wrote:
> Hello Sudarshan,
> 
> On 09/10/2020 11:35 AM, Sudarshan Rajagopalan wrote:
>> When section mappings are enabled, we allocate vmemmap pages from physically
>> contiguous memory of size PMD_SIZE using vmemmap_alloc_block_buf(). Section
>> mappings are good to reduce TLB pressure. But when the system is highly fragmented
>> and memory blocks are being hot-added at runtime, it's possible that such
>> physically contiguous memory allocations can fail. Rather than failing the
> 
> Did you really see this happen on a system?
> 
>> memory hot-add procedure, add a fallback option to allocate vmemmap pages from
>> discontinuous pages using vmemmap_populate_basepages().
> 
> Which could lead to a mixed page size mapping in the VMEMMAP area.

Right, which gives you a slight performance hit - but nobody really cares,
especially if it only happens in corner cases.

At least x86_64 (see vmemmap_populate_hugepages()) and s390x (added
recently by me) implement that behavior.
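
Roughly, the fallback pattern being discussed would look something like
this in arm64's vmemmap_populate() - a sketch only, modeled on the
x86-64 behavior; the helper signatures and the PROT_SECT_NORMAL
attribute are my assumptions about the kernel around this time, not a
quote of the posted patch:

int __meminit vmemmap_populate(unsigned long start, unsigned long end,
			       int node, struct vmem_altmap *altmap)
{
	unsigned long addr = start;
	unsigned long next;
	pgd_t *pgdp;
	p4d_t *p4dp;
	pud_t *pudp;
	pmd_t *pmdp;

	do {
		next = pmd_addr_end(addr, end);

		pgdp = vmemmap_pgd_populate(addr, node);
		if (!pgdp)
			return -ENOMEM;

		p4dp = vmemmap_p4d_populate(pgdp, addr, node);
		if (!p4dp)
			return -ENOMEM;

		pudp = vmemmap_pud_populate(p4dp, addr, node);
		if (!pudp)
			return -ENOMEM;

		pmdp = pmd_offset(pudp, addr);
		if (pmd_none(READ_ONCE(*pmdp))) {
			void *p;

			/* Try a PMD-sized, physically contiguous buffer first. */
			p = vmemmap_alloc_block_buf(PMD_SIZE, node, altmap);
			if (p) {
				/* Section (block) mapping covering the whole PMD. */
				pmd_set_huge(pmdp, __pa(p),
					     __pgprot(PROT_SECT_NORMAL));
			} else if (vmemmap_populate_basepages(addr, next, node,
							      altmap)) {
				/* Base-page fallback also failed - give up. */
				return -ENOMEM;
			}
		} else {
			vmemmap_verify((pte_t *)pmdp, node, addr, next);
		}
	} while (addr = next, addr != end);

	return 0;
}

The key point is the else-branch: instead of returning -ENOMEM as soon
as the PMD-sized allocation fails, that PMD range is populated with base
pages, which is exactly what produces the mixed page size mapping
mentioned above.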

Assume you run in a virtualized environment where your hypervisor tries
to do some smart dynamic guest resizing - like monitoring the guest
memory consumption and adding more memory on demand. You would much rather
want hotadd to succeed (in these corner cases) than fail just because
you weren't able to grab a huge page in one instance.

Examples include XEN balloon, Hyper-V balloon, and virtio-mem. We might
see some of these for arm64 as well (if we don't already).

> Allocation failure in vmemmap_populate() should just cleanly fail
> the memory hot-add operation, which can then be retried. Why does the
> retry have to be offloaded to the kernel?

(I'm not sure what "offloaded to the kernel" really means here - add_memory()
is also just triggered from the kernel.) I disagree; we should try our best
to add memory and make it available, especially when we are already short on
memory.

-- 
Thanks,

David / dhildenb
