Date:   Tue, 22 Mar 2022 17:42:23 -0400
From:   Zi Yan <ziy@...dia.com>
To:     David Hildenbrand <david@...hat.com>
Cc:     linux-mm@...ck.org, linux-kernel@...r.kernel.org,
        virtualization@...ts.linux-foundation.org,
        Vlastimil Babka <vbabka@...e.cz>,
        Mel Gorman <mgorman@...hsingularity.net>,
        Eric Ren <renzhengeek@...il.com>,
        Mike Rapoport <rppt@...nel.org>,
        Oscar Salvador <osalvador@...e.de>,
        Christophe Leroy <christophe.leroy@...roup.eu>
Subject: Re: [PATCH v8 2/5] mm: page_isolation: check specified range for unmovable pages

On 22 Mar 2022, at 12:42, David Hildenbrand wrote:

> On 21.03.22 19:23, Zi Yan wrote:
>> On 21 Mar 2022, at 13:30, David Hildenbrand wrote:
>>
>>> On 17.03.22 16:37, Zi Yan wrote:
>>>> From: Zi Yan <ziy@...dia.com>
>>>>
>>>> Enable set_migratetype_isolate() to check specified sub-range for
>>>> unmovable pages during isolation. Page isolation is done
>>>> at max(MAX_ORDER_NR_PAGES, pageblock_nr_pages) granularity, but not all
>>>> pages within that granularity are intended to be isolated. For example,
>>>> alloc_contig_range(), which uses page isolation, allows ranges without
>>>> alignment. This commit makes unmovable page check only look for
>>>> interesting pages, so that page isolation can succeed for any
>>>> non-overlapping ranges.
>>>>
>>>> Signed-off-by: Zi Yan <ziy@...dia.com>
>>>> ---
>>>>  include/linux/page-isolation.h | 10 +++++
>>>>  mm/page_alloc.c                | 13 +------
>>>>  mm/page_isolation.c            | 69 ++++++++++++++++++++--------------
>>>>  3 files changed, 51 insertions(+), 41 deletions(-)
>>>>
>>>> diff --git a/include/linux/page-isolation.h b/include/linux/page-isolation.h
>>>> index e14eddf6741a..eb4a208fe907 100644
>>>> --- a/include/linux/page-isolation.h
>>>> +++ b/include/linux/page-isolation.h
>>>> @@ -15,6 +15,16 @@ static inline bool is_migrate_isolate(int migratetype)
>>>>  {
>>>>  	return migratetype == MIGRATE_ISOLATE;
>>>>  }
>>>> +static inline unsigned long pfn_max_align_down(unsigned long pfn)
>>>> +{
>>>> +	return ALIGN_DOWN(pfn, MAX_ORDER_NR_PAGES);
>>>> +}
>>>> +
>>>> +static inline unsigned long pfn_max_align_up(unsigned long pfn)
>>>> +{
>>>> +	return ALIGN(pfn, MAX_ORDER_NR_PAGES);
>>>> +}
>>>> +
>>>>  #else
>>>>  static inline bool has_isolate_pageblock(struct zone *zone)
>>>>  {
>>>> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>>>> index 6de57d058d3d..680580a40a35 100644
>>>> --- a/mm/page_alloc.c
>>>> +++ b/mm/page_alloc.c
>>>> @@ -8937,16 +8937,6 @@ void *__init alloc_large_system_hash(const char *tablename,
>>>>  }
>>>>
>>>>  #ifdef CONFIG_CONTIG_ALLOC
>>>> -static unsigned long pfn_max_align_down(unsigned long pfn)
>>>> -{
>>>> -	return ALIGN_DOWN(pfn, MAX_ORDER_NR_PAGES);
>>>> -}
>>>> -
>>>> -static unsigned long pfn_max_align_up(unsigned long pfn)
>>>> -{
>>>> -	return ALIGN(pfn, MAX_ORDER_NR_PAGES);
>>>> -}
>>>> -
>>>>  #if defined(CONFIG_DYNAMIC_DEBUG) || \
>>>>  	(defined(CONFIG_DYNAMIC_DEBUG_CORE) && defined(DYNAMIC_DEBUG_MODULE))
>>>>  /* Usage: See admin-guide/dynamic-debug-howto.rst */
>>>> @@ -9091,8 +9081,7 @@ int alloc_contig_range(unsigned long start, unsigned long end,
>>>>  	 * put back to page allocator so that buddy can use them.
>>>>  	 */
>>>>
>>>> -	ret = start_isolate_page_range(pfn_max_align_down(start),
>>>> -				       pfn_max_align_up(end), migratetype, 0);
>>>> +	ret = start_isolate_page_range(start, end, migratetype, 0);
>>>>  	if (ret)
>>>>  		return ret;
>>>
>>> Shouldn't we similarly adjust undo_isolate_page_range()? IOW, all users
>>> of pfn_max_align_down()/pfn_max_align_up(). would be gone from that file
>>> and you can move these defines into mm/page_isolation.c instead of
>>> include/linux/page-isolation.h?
>>
>> undo_isolate_page_range() faces a much simpler situation: it just needs
>> to unset the migratetype, so we can pass it a pageblock_nr_pages-aligned
>> range. For start_isolate_page_range(), start and end are also used by
>> has_unmovable_pages() for precise unmovable page identification, so
>> they cannot be pageblock_nr_pages aligned. But for readability and symmetry,
>> yes, I can change undo_isolate_page_range() too.
> Yeah, we should call both with the same range and any extension of the
> range should be handled internally.
>
> I thought about some corner cases, especially once we relax some (CMA)
> alignment thingies -- then, the CMA area might be placed at weird
> locations. I haven't checked to which degree they apply, but we should
> certainly keep them in mind whenever we're extending the isolation range.
>
> We can assume that the contig range we're allocating
> a) Belongs to a single zone
> b) Does not contain any memory holes / mmap holes
>
> Let's double check
>
>
> 1) Different zones in extended range
>
> ...   ZONE A  ][ ZONE B ....
> [ Pageblock X ][ Pageblock Y ][ Pageblock Z ]
> [        MAX_ORDER - 1       ]
>
> We can never create a higher-order page between X and Y, because they
> are in different zones. Most probably we should *not* extend the range
> to cover pageblock X in case the zones don't match.
>
> The same consideration applies to the end of the range, when extending
> the isolation range.
>
> But I wonder if we can have such a zone layout. At least
> mm/page_alloc.c:find_zone_movable_pfns_for_nodes() makes sure to always
> align the start of ZONE_MOVABLE to MAX_ORDER_NR_PAGES. I hope it applies
> to all other zones as well? :/

AFAICT, it does not. On x86_64, physical page ranges are read from E820
and put into memblocks, then added to zones. Zone ranges are not aligned
during pgdat initialization.

>
> Anyhow, it should be easy to check when isolating/un-isolating. Only
> conditionally extend the range if the zones of both pageblocks match.
>
>
> When eventually growing MAX_ORDER_NR_PAGES further, could we be in
> trouble because we could suddenly span multiple zones with a single
> MAX_ORDER - 1 page? Then we'd have to handle that I guess.

Yes. Good catch. I need to check whether the MAX_ORDER_NR_PAGES aligned
down PFN is in the same zone as the isolation PFN. I will add the check
in the next version.

>
>
> 2) mmap holes
>
> I think that's already covered by the existing __first_valid_page()
> handling.
>
>
> So, I feel like we might have to tackle the zones issue, especially when
> extending MAX_ORDER_NR_PAGES?

Yes. Will add it in the next version.

Great thanks for pointing this out!


--
Best Regards,
Yan, Zi
