Message-ID: <87cz3zt3u6.fsf@yhuang6-desk2.ccr.corp.intel.com>
Date:   Thu, 20 Apr 2023 15:22:57 +0800
From:   "Huang, Ying" <ying.huang@...el.com>
To:     Baolin Wang <baolin.wang@...ux.alibaba.com>
Cc:     David Hildenbrand <david@...hat.com>, akpm@...ux-foundation.org,
        mgorman@...hsingularity.net, vbabka@...e.cz, mhocko@...e.com,
        linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] mm/page_alloc: consider pfn holes after pfn_valid() in
 __pageblock_pfn_to_page()

Baolin Wang <baolin.wang@...ux.alibaba.com> writes:

> On 4/12/2023 7:25 PM, David Hildenbrand wrote:
>> On 12.04.23 12:45, Baolin Wang wrote:
>>> Now __pageblock_pfn_to_page() is used by set_zone_contiguous(), which
>>> checks whether the given zone contains holes, and which uses pfn_valid()
>>> to check if the end pfn is valid. However, pfn_valid() cannot guarantee
>>> that the end pfn is not in a hole if a pageblock is larger than a
>>> sub-mem_section, since the struct page obtained by pfn_to_page() may
>>> represent a hole or an unusable page frame, which can cause the zone to
>>> be incorrectly marked as contiguous.
>>>
>>> Though the other user of pageblock_pfn_to_page() in compaction seems to
>>> work well now, it is better to avoid scanning or touching these offline
>>> pfns. So, like commit 2d070eab2e82 ("mm: consider zone which is not fully
>>> populated to have holes"), we should also use pfn_to_online_page() for
>>> the end pfn to make sure it is a valid pfn with a usable page frame.
>>> Meanwhile the pfn_valid() for the end pfn can be dropped now.
>>>
>>> Moreover, we already use pfn_to_online_page() for the start pfn to make
>>> sure it is online and valid, so the pfn_valid() for the start pfn is
>>> unnecessary; drop it.
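
For context, a paraphrased sketch of pfn_valid() under CONFIG_SPARSEMEM_VMEMMAP
(details vary by kernel version); the point is that validity is tracked per 2MB
subsection via ms->usage->subsection_map, so a pageblock larger than a
subsection can have a valid start pfn while its end pfn sits in a hole:

static inline int pfn_valid(unsigned long pfn)
{
        struct mem_section *ms;

        if (pfn_to_section_nr(pfn) >= NR_MEM_SECTIONS)
                return 0;
        ms = __pfn_to_section(pfn);
        if (!valid_section(ms))
                return 0;
        /*
         * Early (boot-time) sections are treated as fully valid;
         * hotplugged sections are checked against the per-2MB
         * subsection bitmap in ms->usage->subsection_map.
         */
        return early_section(ms) || pfn_section_valid(ms, pfn);
}
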
>> pageblocks are supposed to fall into a single memory section, so in
>> most cases, if the start is online, so is the end.
>
> Yes, the granularity of memory hotplug is a mem_section.
>
> However, suppose the pageblock order is MAX_ORDER-1 and the size of a
> sub-section is 2M; that means a pageblock will span 2 sub mem-sections,
> and if there is a hole in the zone, the 2nd sub mem-section can be
> invalid without its bit set in the subsection_map bitmap.
>
> So the start pfn being online can ensure that the end pfn of a pageblock
> is online, but a valid start pfn cannot ensure that the end pfn is valid
> in the ms->usage->subsection_map bitmap.
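
To make the geometry concrete, here is a tiny standalone sketch (not kernel
code) of the arithmetic, assuming x86_64 defaults of 4KB pages, a pageblock
order of MAX_ORDER - 1 = 10 (a 4MB pageblock), and 2MB subsections:

#include <stdio.h>

#define PAGE_SHIFT              12
#define PAGEBLOCK_ORDER         10                              /* MAX_ORDER - 1 */
#define PAGES_PER_PAGEBLOCK     (1UL << PAGEBLOCK_ORDER)        /* 1024 pages = 4MB */
#define PAGES_PER_SUBSECTION    (1UL << (21 - PAGE_SHIFT))      /* 512 pages = 2MB */

int main(void)
{
        /* hypothetical, pageblock-aligned start pfn */
        unsigned long start_pfn = 0x100000;
        unsigned long end_pfn = start_pfn + PAGES_PER_PAGEBLOCK - 1;

        /*
         * The two pfns fall into different 2MB subsections, so the
         * subsection_map bit consulted for start_pfn says nothing about
         * end_pfn: the second half of the pageblock may be a hole.
         */
        printf("start_pfn subsection index: %lu\n",
               start_pfn / PAGES_PER_SUBSECTION);
        printf("end_pfn   subsection index: %lu\n",
               end_pfn / PAGES_PER_SUBSECTION);
        return 0;
}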

arch_add_memory
  add_pages
    __add_pages
      sparse_add_section /* set subsection_map */

arch_add_memory() is only called by add_memory_resource() and
pagemap_range() (which calls add_pages() too).  In add_memory_resource(),
check_hotplug_memory_range() enforces a strict hotplug range alignment
requirement (128 MB on x86_64).  pagemap_range() is used for ZONE_DEVICE
only.  That is, for normal memory, the hotplug granularity is much larger
than 2MB.
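
For reference, the check mentioned above, paraphrased from mm/memory_hotplug.c
(the exact form varies by kernel version):

/* Hotplugged ranges must be aligned to the memory block size,
 * which is 128 MB on x86_64 -- far coarser than a 2MB subsection. */
static int check_hotplug_memory_range(u64 start, u64 size)
{
        /* memory range must be block size aligned */
        if (!size || !IS_ALIGNED(start, memory_block_size_bytes()) ||
            !IS_ALIGNED(size, memory_block_size_bytes())) {
                pr_err("Block size [%#lx] unaligned hotplug range: start %#llx, size %#llx\n",
                       memory_block_size_bytes(), start, size);
                return -EINVAL;
        }

        return 0;
}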

IIUC, the situation you mentioned above is impossible.  Or do I miss
something?

Best Regards,
Huang, Ying

>> THE exception to this rule is when we have a mixture of ZONE_DEVICE
>> and ZONE_* within the same section.
>> Then, indeed the end might not be online.
>> BUT, if the end is valid (-> ZONE_DEVICE), then the zone_id will
>> differ. [let's ignore any races for now; up to this point they are
>> mostly of a theoretical nature]
>> So I don't think this change actually fixes anything.
>> 
>> Getting rid of the pfn_valid(start_pfn) check makes sense. Replacing the
>
> Yes, my motivation is to try to optimize __pageblock_pfn_to_page(),
> which is hot when doing compaction, and I saw that these pfn_valid()
> calls can be dropped.
>
>> pfn_valid(end_pfn) with a pfn_to_online_page(end_pfn) could make that
>> function less efficient.
>> 
>>>
>>> Signed-off-by: Baolin Wang <baolin.wang@...ux.alibaba.com>
>>> ---
>>>  mm/page_alloc.c | 7 +++----
>>>  1 file changed, 3 insertions(+), 4 deletions(-)
>>>
>>> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>>> index d0eb280ec7e4..8076f519c572 100644
>>> --- a/mm/page_alloc.c
>>> +++ b/mm/page_alloc.c
>>> @@ -1512,9 +1512,6 @@ struct page *__pageblock_pfn_to_page(unsigned long start_pfn,
>>>          /* end_pfn is one past the range we are checking */
>>>          end_pfn--;
>>>
>>> -        if (!pfn_valid(start_pfn) || !pfn_valid(end_pfn))
>>> -                return NULL;
>>> -
>>>          start_page = pfn_to_online_page(start_pfn);
>>>          if (!start_page)
>>>                  return NULL;
>>> @@ -1522,7 +1519,9 @@ struct page *__pageblock_pfn_to_page(unsigned long start_pfn,
>>>          if (page_zone(start_page) != zone)
>>>                  return NULL;
>>>
>>> -        end_page = pfn_to_page(end_pfn);
>>> +        end_page = pfn_to_online_page(end_pfn);
>>> +        if (!end_page)
>>> +                return NULL;
>>>
>>>          /* This gives a shorter code than deriving page_zone(end_page) */
>>>          if (page_zone_id(start_page) != page_zone_id(end_page))
>> 
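
For reference, reconstructed from the hunks quoted above, __pageblock_pfn_to_page()
with the patch applied would read roughly as follows (the local declarations and
the final return lie outside the quoted context and are assumed):

struct page *__pageblock_pfn_to_page(unsigned long start_pfn,
                                     unsigned long end_pfn, struct zone *zone)
{
        struct page *start_page;
        struct page *end_page;

        /* end_pfn is one past the range we are checking */
        end_pfn--;

        /* the old pfn_valid(start_pfn)/pfn_valid(end_pfn) check is gone */
        start_page = pfn_to_online_page(start_pfn);
        if (!start_page)
                return NULL;

        if (page_zone(start_page) != zone)
                return NULL;

        end_page = pfn_to_online_page(end_pfn);
        if (!end_page)
                return NULL;

        /* This gives a shorter code than deriving page_zone(end_page) */
        if (page_zone_id(start_page) != page_zone_id(end_page))
                return NULL;

        return start_page;
}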
