Message-ID: <83bc1070-2eb4-4fac-aecf-9cc407003ca2@linux.alibaba.com>
Date: Mon, 19 Feb 2024 10:55:59 +0800
From: Baolin Wang <baolin.wang@...ux.alibaba.com>
To: Zi Yan <ziy@...dia.com>, Vlastimil Babka <vbabka@...e.cz>
Cc: akpm@...ux-foundation.org, mgorman@...hsingularity.net,
 linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 1/2] mm: compaction: limit the suitable target page order
 to be less than cc->order



On 2024/2/12 23:00, Zi Yan wrote:
> On 12 Feb 2024, at 4:13, Vlastimil Babka wrote:
> 
>> On 1/22/24 14:01, Baolin Wang wrote:
>>> Isolating target free pages larger than cc->order cannot improve
>>> fragmentation, especially when cc->order is less than pageblock_order.
>>> For example, suppose pageblock_order is MAX_ORDER (4M in size) and cc->order
>>> is the 2M THP order; we should not isolate other free 2M pages as the
>>> migration target, since doing so cannot improve fragmentation.
>>>
>>> Moreover, this also applies to large folio compaction.
>>
>> So why not Cc: Zi Yan? (done)
>>
> 
> Thanks.
> 
> Hi Baolin,
> 
> How often do you see this happening?

This is a theoretical case, identified by code inspection rather than observed in practice.
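
To make the theoretical case concrete, here is a minimal user-space sketch of
the order comparison only (the orders below are assumed example values: with
4K base pages, order 10 is a 4M pageblock and order 9 is a 2M THP; this is
not the kernel code itself):

#include <stdbool.h>
#include <stdio.h>

/* Sketch of the check in suitable_migration_target(), not the kernel code. */
static bool suitable_target(int free_order, int cc_order, int pageblock_order)
{
        /* proposed limit: cap at cc->order when a specific order is requested */
        int limit = cc_order > 0 ? cc_order : pageblock_order;

        /* a free buddy page at or above the limit is not used as a target */
        return free_order < limit;
}

int main(void)
{
        int pageblock_order = 10;       /* 4M pageblock (assumed config) */
        int cc_order = 9;               /* 2M THP request */

        /* order-9 free page: the old check (against pageblock_order) uses it */
        printf("old check uses it: %d\n", 9 < pageblock_order);
        /* the new check (against cc->order) skips it, so it can satisfy
           the allocation directly instead of being split for migration */
        printf("new check uses it: %d\n",
               suitable_target(9, cc_order, pageblock_order));
        return 0;
}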

>>> Signed-off-by: Baolin Wang <baolin.wang@...ux.alibaba.com>
>>
>> I doubt this will make much difference, because if such a larger order free
>> page exists, we shouldn't have a reason to be compacting for a lower order
>> in the first place?
> 
> Unless kswapd gets us such a free block in the background right after
> get_page_from_freelist() and before compaction finishes in the allocation
> slow path.
> 
> If this happens often and cc->order is not -1, it might be better to stop
> compaction and retry get_page_from_freelist() to save cycles on unnecessary
> pfn scanning. For completeness, when cc->order == -1, the logic does not change.

Yes, this is one possible case. There are also other concurrent scenarios: 
for example, while compaction is running (after compaction_suitable() has 
passed), another application may free a large folio back to the free list. 
In that case, scanning that free large folio as a migration target should 
also be avoided.
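
Schematically, the window looks something like this (an illustrative
interleaving, not a captured trace; the function names are the ones in the
allocation slow path and the compaction free scanner, and cc->order = 9 is
the assumed 2M THP case):

  CPU A: allocate 2M THP (cc->order = 9)      CPU B: another task
  --------------------------------------      -------------------------
  get_page_from_freelist() fails
  enter compaction, compaction_suitable() OK
                                              frees an order-9 (2M) folio
  isolate_freepages() meets the order-9 buddy
    old check: order < pageblock_order, so the
      block is split up for migration targets
    new check: order >= cc->order, so it is
      skipped and left to satisfy the
      allocation directly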

>>> ---
>>>   mm/compaction.c | 4 +++-
>>>   1 file changed, 3 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/mm/compaction.c b/mm/compaction.c
>>> index 27ada42924d5..066b72b3471a 100644
>>> --- a/mm/compaction.c
>>> +++ b/mm/compaction.c
>>> @@ -1346,12 +1346,14 @@ static bool suitable_migration_target(struct compact_control *cc,
>>>   {
>>>   	/* If the page is a large free page, then disallow migration */
>>>   	if (PageBuddy(page)) {
>>> +		int order = cc->order > 0 ? cc->order : pageblock_order;
>>> +
>>>   		/*
>>>   		 * We are checking page_order without zone->lock taken. But
>>>   		 * the only small danger is that we skip a potentially suitable
>>>   		 * pageblock, so it's not worth to check order for valid range.
>>>   		 */
>>> -		if (buddy_order_unsafe(page) >= pageblock_order)
>>> +		if (buddy_order_unsafe(page) >= order)
>>>   			return false;
>>>   	}
>>>
> 
> 
> --
> Best Regards,
> Yan, Zi
