Date:   Wed, 8 Jun 2022 20:17:35 +0800
From:   Miaohe Lin <linmiaohe@...wei.com>
To:     Joao Martins <joao.m.martins@...cle.com>,
        Andrew Morton <akpm@...ux-foundation.org>
CC:     <linux-mm@...ck.org>, <linux-kernel@...r.kernel.org>,
        Dan Williams <dan.j.williams@...el.com>
Subject: Re: [PATCH] mm/page_alloc: make calling prep_compound_head more
 reliable

On 2022/6/8 3:17, Joao Martins wrote:
> On 6/7/22 19:32, Andrew Morton wrote:
>>
>> Let's cc Joao.
>>
>> On Tue, 7 Jun 2022 22:41:57 +0800 Miaohe Lin <linmiaohe@...wei.com> wrote:
>>
>>> compound_pincount_ptr is stored at first tail page instead of second tail
>>> page now.
>>
>> "now"?  Some identifiable commit did this?
>>
> 
> I think this was in:
> 
> commit 5232c63f46fd ("mm: Make compound_pincount always available")

Thanks for identifying it.
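
For reference, my understanding of the layout change in that commit
(paraphrasing include/linux/mm.h around v5.18, so a sketch rather than
the verbatim source): before it, the pincount lived in the second tail
page,

	static inline atomic_t *compound_pincount_ptr(struct page *page)
	{
		/* second tail page (page[2]) */
		return &page[2].hpage_pinned_refcount;
	}

and after it, the pincount lives in the first tail page, next to
compound_mapcount and compound_order:

	static inline atomic_t *compound_pincount_ptr(struct page *page)
	{
		/* first tail page (page[1]) */
		return &page[1].compound_pincount;
	}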

> 
>>> And if it or some other field changes again in the future, data
>>> might be overwritten. Call prep_compound_head() outside the loop
>>> to prevent such a possible issue. No functional change intended.
>>>
>>> ...
>>>
>>> --- a/mm/page_alloc.c
>>> +++ b/mm/page_alloc.c
>>> @@ -6772,17 +6772,8 @@ static void __ref memmap_init_compound(struct page *head,
>>>  		__init_zone_device_page(page, pfn, zone_idx, nid, pgmap);
>>>  		prep_compound_tail(head, pfn - head_pfn);
>>>  		set_page_count(page, 0);
>>> -
>>> -		/*
>>> -		 * The first tail page stores compound_mapcount_ptr() and
>>> -		 * compound_order() and the second tail page stores
>>> -		 * compound_pincount_ptr(). Call prep_compound_head() after
>>> -		 * the first and second tail pages have been initialized to
>>> -		 * not have the data overwritten.
>>> -		 */
>>> -		if (pfn == head_pfn + 2)
>>> -			prep_compound_head(head, order);
>>>  	}
>>> +	prep_compound_head(head, order);
>>>  }
>>>  
>>>  void __ref memmap_init_zone_device(struct zone *zone,
>>
> 
> memmap_init_compound() is only called in the pmem case.
> 
> The idea to make this /right after/ we initialize the offending tail pages has
> to do with the @altmap case, where struct pages are placed in PMEM, and thus
> takes advantage of the likelihood of those tail struct pages being cached,
> given that we will read them right after in prep_compound_head().
> 
> I agree with the general sentiment of making this 'more resilient' to compound
> page structure changes by moving prep_compound_head() after all tail pages are
> initialized, although I need to express a concern that this may affect or
> regress the altmap case. Consider that on 2M compound pages we would access the
> first and second tail pages /after/ initializing 32768 struct pages, or after
> touching/initializing 256K struct pages.

Many thanks for your explanation. IIUC, the change below would be preferred?

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 4c7d99ee58b4..048df5d78add 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -6771,13 +6771,18 @@ static void __ref memmap_init_compound(struct page *head,
                set_page_count(page, 0);

                /*
-                * The first tail page stores compound_mapcount_ptr() and
-                * compound_order() and the second tail page stores
-                * compound_pincount_ptr(). Call prep_compound_head() after
-                * the first and second tail pages have been initialized to
-                * not have the data overwritten.
+                * The first tail page stores compound_mapcount_ptr(),
+                * compound_order() and compound_pincount_ptr(). Call
+                * prep_compound_head() after the first tail page has
+                * been initialized so that its data is not overwritten.
+                *
+                * Note that calling this right after the offending tail
+                * page is initialized tries to take advantage of the
+                * likelihood of that tail struct page being cached,
+                * given that we will read it right after in
+                * prep_compound_head().
                 */
-               if (pfn == head_pfn + 2)
+               if (unlikely(pfn == head_pfn + 1))
                        prep_compound_head(head, order);
        }
 }
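
For additional context, everything prep_compound_head() writes to tail
pages now lives in the first tail page (paraphrasing mm/page_alloc.c
as of that commit, so a sketch rather than the verbatim source):

	static void prep_compound_head(struct page *page, unsigned int order)
	{
		set_compound_page_dtor(page, COMPOUND_PAGE_DTOR); /* page[1].compound_dtor */
		set_compound_order(page, order);         /* page[1].compound_order (+ compound_nr) */
		atomic_set(compound_mapcount_ptr(page), -1); /* page[1].compound_mapcount */
		atomic_set(compound_pincount_ptr(page), 0);  /* page[1].compound_pincount */
	}

So waiting only for head_pfn + 1 should keep the just-initialized first
tail page cache-hot when prep_compound_head() accesses it.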

Or am I missing something?

Thanks!

> .
> 
