Date:   Fri, 29 Mar 2019 14:42:43 +0100
From:   Michal Hocko <mhocko@...nel.org>
To:     Oscar Salvador <osalvador@...e.de>
Cc:     David Hildenbrand <david@...hat.com>, akpm@...ux-foundation.org,
        dan.j.williams@...el.com, Jonathan.Cameron@...wei.com,
        anshuman.khandual@....com, linux-kernel@...r.kernel.org,
        linux-mm@...ck.org
Subject: Re: [PATCH 0/4] mm,memory_hotplug: allocate memmap from hotadded
 memory

On Fri 29-03-19 09:45:47, Oscar Salvador wrote:
[...]
> * memblock granularity 128M
> 
> (qemu) object_add memory-backend-ram,id=ram0,size=256M
> (qemu) device_add pc-dimm,id=dimm0,memdev=ram0,node=1
> 
> This will create two memblocks (2 sections), and if we allocate the vmemmap
> data for each section within its own memblock, we only get 126M of contiguous
> memory per memblock.
> 
> So, the approach taken is to allocate the vmemmap data corresponding to the
> whole DIMM/memory-device/memory-resource from the beginning of its memory.
> 
> In the example from above, the vmemmap data for both sections is allocated from
> the beginning of the first section:
> 
> memmap array takes 2MB per section, so 512 pfns.
> If we add 2 sections:
> 
> [  pfn#0  ]  \
> [  ...    ]  |  vmemmap used for memmap array
> [pfn#1023 ]  /  
> 
> [pfn#1024 ]  \
> [  ...    ]  |  used as normal memory
> [pfn#65535]  /
> 
> So, out of 256M, we get 252M to use as real memory, as 4M will be used for
> building the memmap array.
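
A quick way to sanity-check the numbers above is to redo the arithmetic
explicitly. The sketch below assumes the x86_64 defaults from the example
(128M memory sections, 4K base pages) and a 64-byte struct page; it is a
standalone userspace illustration, not kernel code.

#include <stdio.h>

int main(void)
{
	/* Assumptions: x86_64 defaults as in the example above. */
	const unsigned long section_size = 128UL << 20;	/* 128M per memory section */
	const unsigned long page_size    = 4096UL;	/* 4K base pages */
	const unsigned long page_struct  = 64UL;	/* sizeof(struct page), assumed */
	const unsigned long dimm_size    = 256UL << 20;	/* 256M DIMM from the qemu example */

	unsigned long pages_per_section  = section_size / page_size;		/* 32768 */
	unsigned long memmap_per_section = pages_per_section * page_struct;	/* 2M */
	unsigned long memmap_pfns        = memmap_per_section / page_size;	/* 512 pfns */

	unsigned long sections     = dimm_size / section_size;			/* 2 */
	unsigned long memmap_total = sections * memmap_per_section;		/* 4M */

	printf("memmap per section: %luM (%lu pfns)\n",
	       memmap_per_section >> 20, memmap_pfns);
	printf("memmap for the DIMM: %luM, usable: %luM\n",
	       memmap_total >> 20, (dimm_size - memmap_total) >> 20);
	return 0;
}

Running it prints 2M (512 pfns) of memmap per section and 4M for the whole
256M DIMM, leaving 252M usable, which matches the layout drawn above.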

Having a larger contiguous area is definitely nice to have, but you also
have to consider the other side of the coin. If we have a movable
memblock with unmovable memory, then we are breaking the movable
property. So there should be some flexibility for the caller to say whether
to allocate on a per-device or per-memblock basis. Or we need something to
move memmaps during hotremove.
-- 
Michal Hocko
SUSE Labs
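
One way to picture the per-device vs per-memblock flexibility discussed above
is an explicit placement argument when adding memory. The names below
(vmemmap_placement, add_memory_placed) are hypothetical and are not part of
the kernel's interface in this thread; this is only a compilable sketch of
the idea under that assumption.

#include <stdint.h>
#include <stdio.h>

/* Hypothetical illustration only -- not an existing kernel interface. */
enum vmemmap_placement {
	VMEMMAP_EXTERNAL,	/* allocate the memmap from already-present memory */
	VMEMMAP_PER_MEMBLOCK,	/* memmap at the start of each hotadded memblock */
	VMEMMAP_PER_DEVICE,	/* memmap at the start of the whole DIMM/device */
};

/* Stub standing in for a hypothetical add_memory() variant. */
static int add_memory_placed(int nid, uint64_t start, uint64_t size,
			     enum vmemmap_placement placement)
{
	printf("node %d: add [%#lx-%#lx), placement %d\n",
	       nid, (unsigned long)start, (unsigned long)(start + size),
	       placement);
	return 0;
}

int main(void)
{
	/*
	 * A caller that wants the range to stay movable keeps the memmap
	 * per memblock (or external); one that only wants the largest
	 * contiguous area places it once per device.
	 */
	add_memory_placed(1, 0x100000000UL, 256UL << 20, VMEMMAP_PER_MEMBLOCK);
	add_memory_placed(1, 0x110000000UL, 256UL << 20, VMEMMAP_PER_DEVICE);
	return 0;
}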
