Message-ID: <65e5da9c-32d1-17d7-d8c6-96cbfac23fec@oracle.com>
Date:   Tue, 7 Jun 2022 20:17:01 +0100
From:   Joao Martins <joao.m.martins@...cle.com>
To:     Andrew Morton <akpm@...ux-foundation.org>,
        Miaohe Lin <linmiaohe@...wei.com>
Cc:     linux-mm@...ck.org, linux-kernel@...r.kernel.org,
        Dan Williams <dan.j.williams@...el.com>
Subject: Re: [PATCH] mm/page_alloc: make calling prep_compound_head more
 reliable

On 6/7/22 19:32, Andrew Morton wrote:
> 
> Let's cc Joao.
> 
> On Tue, 7 Jun 2022 22:41:57 +0800 Miaohe Lin <linmiaohe@...wei.com> wrote:
> 
>> compound_pincount_ptr is stored in the first tail page instead of the
>> second tail page now.
> 
> "now"?  Some identifiable commit did this?
> 

I think this was in:

commit 5232c63f46fd ("mm: Make compound_pincount always available")
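
That commit moved the pin count from the second tail page into the first
tail page. As a reminder of what that means for the accessor, here is a
sketch of compound_pincount_ptr() after the change (paraphrased from
include/linux/mm.h of that era; treat the exact field name as approximate
rather than verbatim):

    /* After 5232c63f46fd the pin count lives in the first tail
     * page (page[1]) instead of the second tail page (page[2]). */
    static inline atomic_t *compound_pincount_ptr(struct page *page)
    {
            return &page[1].compound_pincount;
    }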

>> And if it or some other field changes again in the future, data might be
>> overwritten. Call prep_compound_head() outside the loop to prevent such a
>> possible issue. No functional change intended.
>>
>> ...
>>
>> --- a/mm/page_alloc.c
>> +++ b/mm/page_alloc.c
>> @@ -6772,17 +6772,8 @@ static void __ref memmap_init_compound(struct page *head,
>>  		__init_zone_device_page(page, pfn, zone_idx, nid, pgmap);
>>  		prep_compound_tail(head, pfn - head_pfn);
>>  		set_page_count(page, 0);
>> -
>> -		/*
>> -		 * The first tail page stores compound_mapcount_ptr() and
>> -		 * compound_order() and the second tail page stores
>> -		 * compound_pincount_ptr(). Call prep_compound_head() after
>> -		 * the first and second tail pages have been initialized to
>> -		 * not have the data overwritten.
>> -		 */
>> -		if (pfn == head_pfn + 2)
>> -			prep_compound_head(head, order);
>>  	}
>> +	prep_compound_head(head, order);
>>  }
>>  
>>  void __ref memmap_init_zone_device(struct zone *zone,
> 

memmap_init_compound() is only called in the pmem case.

The idea of doing this /right after/ we initialize the offending tail pages
has to do with the @altmap case, where the struct pages are placed in PMEM:
that way we take advantage of the likelihood of those tail struct pages
still being cached, given that we read them right after in
prep_compound_head().
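
For reference, prep_compound_head() at the time looked roughly like the
below (a paraphrase of mm/page_alloc.c around v5.18, not the verbatim
code); note that every store it does lands in the tail pages the loop has
just written:

    static void prep_compound_head(struct page *page, unsigned int order)
    {
            /* dtor and order live in the first tail page */
            set_compound_page_dtor(page, COMPOUND_PAGE_DTOR);
            set_compound_order(page, order);
            /* mapcount, and post-5232c63f46fd pincount, do as well */
            atomic_set(compound_mapcount_ptr(page), -1);
            atomic_set(compound_pincount_ptr(page), 0);
    }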

I agree with the general sentiment of making this 'more resilient' to
changes in the compound page structure by moving prep_compound_head() after
all tail pages are initialized, although I need to express a concern about
the altmap case possibly being affected or regressed: on 2M compound pages
we would access the first and second tail pages only /after/ initializing
512 struct pages (i.e. 32K worth of struct pages), and on 1G compound pages
only after touching/initializing 256K struct pages.
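
Back-of-the-envelope, assuming 4K base pages and a 64-byte struct page (a
standalone userspace sketch, not kernel code):

    #include <stdio.h>

    int main(void)
    {
            unsigned long base = 4096, sp = 64;            /* assumed sizes */
            unsigned long sz[] = { 2UL << 20, 1UL << 30 }; /* 2M, 1G */

            for (int i = 0; i < 2; i++) {
                    unsigned long nr = sz[i] / base;
                    printf("%4luM compound: %6lu struct pages, %6luK of memmap\n",
                           sz[i] >> 20, nr, nr * sp >> 10);
            }
            return 0;
    }

which works out to 512 struct pages (32K of memmap) for the 2M case and
262144 struct pages (16M of memmap) for the 1G case.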
