Message-ID: <1c91dd62-886d-bb05-8aee-22191a8a2d8e@linux.dev>
Date:   Fri, 13 Oct 2023 17:29:19 +0800
From:   Yajun Deng <yajun.deng@...ux.dev>
To:     Mike Rapoport <rppt@...nel.org>
Cc:     David Hildenbrand <david@...hat.com>, akpm@...ux-foundation.org,
        mike.kravetz@...cle.com, muchun.song@...ux.dev,
        willy@...radead.org, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH v4 2/2] mm: Init page count in reserve_bootmem_region when
 MEMINIT_EARLY


On 2023/10/13 16:48, Mike Rapoport wrote:
> On Thu, Oct 12, 2023 at 05:53:22PM +0800, Yajun Deng wrote:
>> On 2023/10/12 17:23, David Hildenbrand wrote:
>>> On 10.10.23 04:31, Yajun Deng wrote:
>>>> On 2023/10/8 16:57, Yajun Deng wrote:
>>>>>> That looks wrong. If the page count would by pure luck be 0
>>>>>> already for hotplugged memory, you wouldn't clear the reserved
>>>>>> flag.
>>>>>>
>>>>>> These changes make me a bit nervous.
>>>>> Would 'if (page_count(page) || PageReserved(page))' be safer? Or do I
>>>>> need to do something else?
>>>>>
>>>> How about the following if statement? But it would need additional
>>>> patches, as in v1 ([PATCH 2/4] mm: Introduce MEMINIT_LATE context).
>>>>
>>>> It'll be safer, but more complex. Please comment...
>>>>
>>>>      if (context != MEMINIT_EARLY || (page_count(page) ||
>>>> PageReserved(page))) {
>>>>
>>> Ideally we could make initialization only depend on the context, and not
>>> check for count or the reserved flag.
>>>
>> Here is the link to v1:
>> https://lore.kernel.org/all/20230922070923.355656-1-yajun.deng@linux.dev/
>>
>> If we can make initialization depend only on the context, I'll modify it
>> based on v1.
> Although ~20% improvement looks impressive, this only optimizes a fraction
> of the boot time, and realistically, how much does 56 msec save from the
> total boot time when you boot a machine with 190G of RAM?


There are a lot of factors that can affect the total boot time, so saving
56 msec may be insignificant on its own.

But if we look at the boot log, we'll see there's a significant time jump.

before:

[    0.250334] ACPI: PM-Timer IO Port: 0x508
[    0.618994] Memory: 173413056K/199884452K available (18440K kernel code, 4204K rwdata, 5608K rodata, 3188K init, 17024K bss, 5499616K reserved, 20971520K cma-reserved)

after:

[    0.260229] software IO TLB: area num 32.
[    0.563497] Memory: 173413056K/199884452K available (18440K kernel code, 4204K rwdata, 5608K rodata, 3188K init, 17024K bss, 5499616K reserved, 20971520K cma-reserved)

Memory initialization is what consumes the time in that part of the boot log.
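Comparing the two 'Memory:' timestamps gives roughly the saving discussed
above: 0.618994 s - 0.563497 s ≈ 55.5 msec.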

> I still think the improvement does not justify the churn, added complexity
> and special casing of different code paths of initialization of struct pages.
>   


Because there is a per-page loop, the loop will run 1024 times when the
order is MAX_ORDER. The following 'if' would be safer (a rough sketch of
where it would sit follows below):

'if (context != MEMINIT_EARLY || (page_count(page) || PageReserved(page))) {'

This is a foundation: we may change it once it becomes safe to remove the
page init in the hotplug context one day, so the special case for the
early context is only temporary.
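
For illustration, a rough sketch of where I imagine such a guard sitting.
This assumes the per-page loop of __free_pages_core() and a context
argument added by this series; the surrounding code is simplified and is
not the actual patch:

static void __free_pages_core_sketch(struct page *page, unsigned int order,
				     enum meminit_context context)
{
	unsigned int nr_pages = 1 << order;	/* 1024 pages when order == MAX_ORDER */
	struct page *p = page;
	unsigned int loop;

	for (loop = 0; loop < nr_pages; loop++, p++) {
		/*
		 * Skip the per-page work in the early path unless the page
		 * actually carries a refcount or the reserved flag, so pages
		 * that never got them are left untouched.
		 */
		if (context != MEMINIT_EARLY ||
		    page_count(p) || PageReserved(p)) {
			__ClearPageReserved(p);
			set_page_count(p, 0);
		}
	}

	/* ... then hand the whole block over to the buddy allocator ... */
}

That keeps the common early path from touching struct pages it does not
need to, while any page that does carry a refcount or the reserved flag is
still cleared before the block is handed to the buddy allocator.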

>> @Mike, by the way, this code will cost more time:
>>
>>                  if (context == MEMINIT_HOTPLUG)
>>                          flags = INIT_PAGE_COUNT | INIT_PAGE_RESERVED;
>>                  __init_single_page(page, pfn, zone, nid, flags);
>>
>>
>> [    0.014999] On node 0, zone DMA32: 31679 pages in unavailable ranges
>> [    0.311560] ACPI: PM-Timer IO Port: 0x508
>>
>>
>> This code will cost less time:
>>
>>                  __init_single_page(page, pfn, zone, nid, 0);
>>                  if (context == MEMINIT_HOTPLUG) {
>>                          init_page_count(page);
>>                          __SetPageReserved(page);
>>                  }
>>
>> [    0.014299] On node 0, zone DMA32: 31679 pages in unavailable ranges
>> [    0.250223] ACPI: PM-Timer IO Port: 0x508
>>
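
My reading of why the two placements differ (speculation on my side; the
only data is the timestamps above): in the first variant a runtime 'flags'
value is tested for every page inside __init_single_page(), while in the
second variant the helper always receives a constant 0 that the compiler
can fold away, and only the rare hotplug case pays for the extra work
afterwards. Roughly:

	/* Slower placement: 'flags' is a runtime value, so the flag tests
	 * inside the (inlined) helper stay in the hot per-page path. */
	if (context == MEMINIT_HOTPLUG)
		flags = INIT_PAGE_COUNT | INIT_PAGE_RESERVED;
	__init_single_page(page, pfn, zone, nid, flags);

	/* Faster placement: the helper gets a constant 0; only the rare
	 * MEMINIT_HOTPLUG case does the extra refcount/reserved setup. */
	__init_single_page(page, pfn, zone, nid, 0);
	if (context == MEMINIT_HOTPLUG) {
		init_page_count(page);
		__SetPageReserved(page);
	}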
