Message-ID: <6c58b0ef-7a9a-491d-7286-7642f9d4c7bb@redhat.com>
Date:   Fri, 29 Mar 2019 10:01:26 +0100
From:   David Hildenbrand <david@...hat.com>
To:     Oscar Salvador <osalvador@...e.de>
Cc:     akpm@...ux-foundation.org, mhocko@...e.com,
        dan.j.williams@...el.com, Jonathan.Cameron@...wei.com,
        anshuman.khandual@....com, linux-kernel@...r.kernel.org,
        linux-mm@...ck.org
Subject: Re: [PATCH 0/4] mm,memory_hotplug: allocate memmap from hotadded
 memory

On 29.03.19 09:56, David Hildenbrand wrote:
> On 29.03.19 09:45, Oscar Salvador wrote:
>> On Thu, Mar 28, 2019 at 04:31:44PM +0100, David Hildenbrand wrote:
>>> Correct me if I am wrong. I think I was confused - vmemmap data is still
>>> allocated *per memory block*, not for the whole added memory, correct?
>>
>> No, vmemmap data is allocated per added memory-resource.
>> In the case of a DIMM, that would be the DIMM; in the case of a qemu memory-device,
>> it would be that memory-device.
>> That is assuming that ACPI does not split the DIMM/memory-device into several
>> memory resources.
>> If that happens, then acpi_memory_enable_device() calls __add_memory for every
>> memory-resource, which means that the vmemmap data will be allocated per
>> memory-resource.
>> I did not see this happening though, and I am not sure under which circumstances
>> it can happen (I have to study the ACPI code a bit more).
>>
>> The problem with allocating vmemmap data per memblock is fragmentation.
>> Let us say you do the following:
>>
>> * memblock granularity 128M
>>
>> (qemu) object_add memory-backend-ram,id=ram0,size=256M
>> (qemu) device_add pc-dimm,id=dimm0,memdev=ram0,node=1
>>
>> This will create two memblocks (2 sections), and if we allocate the vmemmap
>> data for each section within that section (memblock) itself, you only get
>> 126M of contiguous memory.
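
Just to spell out where the 126M above comes from (a back-of-the-envelope
sketch, not the actual patch code; it assumes 4K base pages and a 64-byte
struct page, as is typical on x86_64):

/* Rough check of the 126M figure: vmemmap cost of one 128M section,
 * assuming 4K pages and sizeof(struct page) == 64. */
#include <stdio.h>

int main(void)
{
	unsigned long section = 128UL << 20;         /* 128M memblock/section        */
	unsigned long pagesz  = 4096;                /* base page size               */
	unsigned long sp_size = 64;                  /* sizeof(struct page), assumed */

	unsigned long pages   = section / pagesz;    /* 32768 pages per section      */
	unsigned long vmemmap = pages * sp_size;     /* 2M of vmemmap per section    */

	printf("vmemmap per section: %luM\n", vmemmap >> 20);             /* 2M   */
	printf("contiguous left:     %luM\n", (section - vmemmap) >> 20); /* 126M */
	return 0;
}

So each 128M section loses ~2M to its own vmemmap, leaving 126M contiguous.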
> 
> Oh okay, so it is actually the way I guessed it would be now.
> 
> While this makes total sense, I'll have to look at how it is currently
> handled, meaning whether there is a change. I somewhat remember that
> delayed struct page initialization would initialize the vmemmap per section,
> not per memory resource.
> 
> But as I work on 10 things differently, my mind sometimes seems to
> forget stuff in order to replace it with random nonsense. Will look into
> the details to not have to ask too many dumb questions.

s/differently/concurrently/

See, nonsense ;)

-- 

Thanks,

David / dhildenb
