Date:   Mon, 2 Oct 2023 13:25:02 +0200
From:   David Hildenbrand <david@...hat.com>
To:     Mike Rapoport <rppt@...nel.org>
Cc:     Yajun Deng <yajun.deng@...ux.dev>, akpm@...ux-foundation.org,
        mike.kravetz@...cle.com, muchun.song@...ux.dev,
        willy@...radead.org, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH v4 2/2] mm: Init page count in reserve_bootmem_region when
 MEMINIT_EARLY

On 02.10.23 13:10, Mike Rapoport wrote:
> On Mon, Oct 02, 2023 at 10:56:51AM +0200, David Hildenbrand wrote:
>> On 02.10.23 10:47, Mike Rapoport wrote:
>>> On Mon, Oct 02, 2023 at 03:03:56PM +0800, Yajun Deng wrote:
>>>>
>>>> On 2023/10/2 02:59, Mike Rapoport wrote:
>>>>> On Fri, Sep 29, 2023 at 06:27:25PM +0800, Yajun Deng wrote:
>>>>>> On 2023/9/29 18:02, Mike Rapoport wrote:
>>>>>>>>>> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>>>>>>>>>> index 06be8821d833..b868caabe8dc 100644
>>>>>>>>>> --- a/mm/page_alloc.c
>>>>>>>>>> +++ b/mm/page_alloc.c
>>>>>>>>>> @@ -1285,18 +1285,22 @@ void __free_pages_core(struct page *page, unsigned int order)
>>>>>>>>>>       	unsigned int loop;
>>>>>>>>>>       	/*
>>>>>>>>>> -	 * When initializing the memmap, __init_single_page() sets the refcount
>>>>>>>>>> -	 * of all pages to 1 ("allocated"/"not free"). We have to set the
>>>>>>>>>> -	 * refcount of all involved pages to 0.
>>>>>>>>>> +	 * When initializing the memmap, memmap_init_range sets the refcount
>>>>>>>>>> +	 * of all pages ("reserved" and "free") to 1 in the hotplug context,
>>>>>>>>>> +	 * so we have to set the refcount of all involved pages to 0 here.
>>>>>>>>>> +	 * In the early context we don't, because reserve_bootmem_region
>>>>>>>>>> +	 * only sets the refcount of pages in reserved regions ("reserved").
>>>>>>>>>>       	 */
>>>>>>>>> Again, why hotplug and early init should be different?
>>>>>>>> I will add a comment describing that it saves boot time.
>>>>>>> But why do we need to initialize struct pages differently at boot time
>>>>>>> vs memory hotplug?
>>>>>>> Is there a reason memory hotplug cannot have page count set to 0 just like
>>>>>>> for pages reserved at boot time?
>>>>>> This patch just saves boot time in MEMINIT_EARLY. If someone finds that
>>>>>> it can also save time in MEMINIT_HOTPLUG, I think that can be done in a
>>>>>> later patch. I just kept it the same here.
>>>>> But it's not the same. It becomes slower after your patch and the code that
>>>>> frees the pages for MEMINIT_EARLY and MEMINIT_HOTPLUG becomes non-uniform
>>>>> for no apparent reason.
>>>>
>>>> __free_pages_core will also be called by others, such as:
>>>> deferred_free_range, do_collection and memblock_free_late.
>>>>
>>>> We couldn't remove the 'if (page_count(page))' check even if we set the
>>>> page count to 0 for MEMINIT_HOTPLUG.
>>>
>>> That 'if' breaks the invariant that __free_pages_core is always called for
>>> pages with an initialized page count. Adding it may lead to subtle bugs and
>>> random memory corruption, so we don't want to add it in the first place.
>>
>> As long as we have to special-case memory hotplug, we know that we are
>> always coming via generic_online_page() in that case. We could either move
>> some logic over there, or let __free_pages_core() know what it should do.
> 
> Looks like the patch rather special-cases MEMINIT_EARLY, although I didn't
> check the other code paths thoroughly.
> Anyway, relying on page_count() to be correct in different ways for
> different callers of __free_pages_core() does not sound right to me.

Absolutely agreed.
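
To make that concrete, something like the following completely untested
sketch would keep the invariant: reset the refcount in the one hotplug
entry point we know about, assuming __free_pages_core() itself no longer
touches the refcount (highmem accounting elided):

void generic_online_page(struct page *page, unsigned int order)
{
	unsigned long nr_pages = 1UL << order;
	struct page *p = page;
	unsigned long i;

	/*
	 * Memmap init in the hotplug path set the refcount of all pages
	 * to 1; reset it here so __free_pages_core() can rely on a
	 * refcount of 0 for every caller.
	 */
	for (i = 0; i < nr_pages; i++, p++)
		set_page_count(p, 0);

	__free_pages_core(page, order);
	totalram_pages_add(nr_pages);
}

The alternative would be to pass the meminit context down to
__free_pages_core() explicitly, instead of guessing from page_count().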

-- 
Cheers,

David / dhildenb
