Message-ID: <3651bce1-f84b-4537-bc57-ef6d7460749f@126.com>
Date: Fri, 13 Dec 2024 16:43:55 +0800
From: Ge Yang <yangge1116@....com>
To: Baolin Wang <baolin.wang@...ux.alibaba.com>, akpm@...ux-foundation.org
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org, stable@...r.kernel.org,
 21cnbao@...il.com, david@...hat.com, vbabka@...e.cz, liuzixing@...on.cn
Subject: Re: [PATCH] mm, compaction: don't use ALLOC_CMA in long term GUP flow



On 2024/12/13 16:23, Baolin Wang wrote:
> 
> 
> On 2024/12/13 15:37, yangge1116@....com wrote:
>> From: yangge <yangge1116@....com>
>>
>> Since commit 984fdba6a32e ("mm, compaction: use proper alloc_flags
>> in __compaction_suitable()") allowed compaction to proceed when the
>> free pages required for compaction reside in CMA pageblocks,
>> __compaction_suitable() may always return true, which is not
>> acceptable in some cases.
>>
>> My machine has 4 NUMA nodes, each with 32GB of memory, and I have
>> configured 16GB of CMA memory on each node. With this setup, starting
>> a 32GB virtual machine with device passthrough is extremely slow,
>> taking almost an hour.
>>
>> During start-up, the virtual machine calls
>> pin_user_pages_remote(..., FOLL_LONGTERM, ...) to allocate its memory.
>> Long term GUP cannot allocate memory from the CMA area, so at most
>> 16GB of non-CMA memory on a NUMA node can be used as virtual machine
>> memory. Since there is 16GB of free CMA memory on the NUMA node, the
>> order-0 watermark for compaction is always met, so
>> __compaction_suitable() always returns true, even if the node can no
>> longer allocate non-CMA memory for the virtual machine.
>>
>> For costly allocations, because __compaction_suitable() always
>> returns true, __alloc_pages_slowpath() can't exit at the appropriate
>> place, resulting in excessively long virtual machine startup times.
>> Call trace:
>> __alloc_pages_slowpath
>>      if (compact_result == COMPACT_SKIPPED ||
>>          compact_result == COMPACT_DEFERRED)
>>          goto nopage; // should exit __alloc_pages_slowpath() from here
>>
>> To sum up, in the long term GUP flow we should drop ALLOC_CMA in
>> both __compaction_suitable() and __isolate_free_page().
>>
>> Fixes: 984fdba6a32e ("mm, compaction: use proper alloc_flags in __compaction_suitable()")
>> Cc: <stable@...r.kernel.org>
>> Signed-off-by: yangge <yangge1116@....com>
>> ---
>>   mm/compaction.c | 8 +++++---
>>   mm/page_alloc.c | 4 +++-
>>   2 files changed, 8 insertions(+), 4 deletions(-)
>>
>> diff --git a/mm/compaction.c b/mm/compaction.c
>> index 07bd227..044c2247 100644
>> --- a/mm/compaction.c
>> +++ b/mm/compaction.c
>> @@ -2384,6 +2384,7 @@ static bool __compaction_suitable(struct zone *zone, int order,
>>                     unsigned long wmark_target)
>>   {
>>       unsigned long watermark;
>> +    bool pin;
>>       /*
>>        * Watermarks for order-0 must be met for compaction to be able to
>>        * isolate free pages for migration targets. This means that the
>> @@ -2395,14 +2396,15 @@ static bool __compaction_suitable(struct zone *zone, int order,
>>        * even if compaction succeeds.
>>        * For costly orders, we require low watermark instead of min for
>>        * compaction to proceed to increase its chances.
>> -     * ALLOC_CMA is used, as pages in CMA pageblocks are considered
>> -     * suitable migration targets
>> +     * Except in the long term GUP flow, ALLOC_CMA is used, as pages in
>> +     * CMA pageblocks are considered suitable migration targets
>>        */
>>       watermark = (order > PAGE_ALLOC_COSTLY_ORDER) ?
>>                   low_wmark_pages(zone) : min_wmark_pages(zone);
>>       watermark += compact_gap(order);
>> +    pin = !!(current->flags & PF_MEMALLOC_PIN);
>>       return __zone_watermark_ok(zone, 0, watermark, highest_zoneidx,
>> -                   ALLOC_CMA, wmark_target);
>> +                   pin ? 0 : ALLOC_CMA, wmark_target);
>>   }
> 
> Seems a little hacky to me. Using the 'cc->alloc_flags' passed from the
> caller to determine if 'ALLOC_CMA' is needed looks more reasonable to me.

Ok, thanks.
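
For reference, a rough sketch of that alternative is below. The signature
change and the way alloc_flags is threaded through are assumptions for
illustration only, not the actual follow-up patch:

/*
 * Sketch only: let the caller's alloc_flags (e.g. cc->alloc_flags) decide
 * whether free CMA pages may be counted, instead of re-deriving it from
 * current->flags inside compaction.
 */
static bool __compaction_suitable(struct zone *zone, int order,
				  unsigned int alloc_flags,
				  int highest_zoneidx,
				  unsigned long wmark_target)
{
	unsigned long watermark;

	watermark = (order > PAGE_ALLOC_COSTLY_ORDER) ?
				low_wmark_pages(zone) : min_wmark_pages(zone);
	watermark += compact_gap(order);

	/* Only count free CMA pages if this allocation may use CMA. */
	return __zone_watermark_ok(zone, 0, watermark, highest_zoneidx,
				   alloc_flags & ALLOC_CMA, wmark_target);
}

That would keep the ALLOC_CMA decision in one place (wherever
cc->alloc_flags is computed) rather than checking PF_MEMALLOC_PIN again
inside compaction.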

> 
>>   /*
>> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>> index dde19db..9a5dfda 100644
>> --- a/mm/page_alloc.c
>> +++ b/mm/page_alloc.c
>> @@ -2813,6 +2813,7 @@ int __isolate_free_page(struct page *page, unsigned int order)
>>   {
>>       struct zone *zone = page_zone(page);
>>       int mt = get_pageblock_migratetype(page);
>> +    bool pin;
>>       if (!is_migrate_isolate(mt)) {
>>           unsigned long watermark;
>> @@ -2823,7 +2824,8 @@ int __isolate_free_page(struct page *page, unsigned int order)
>>            * exists.
>>            */
>>           watermark = zone->_watermark[WMARK_MIN] + (1UL << order);
>> -        if (!zone_watermark_ok(zone, 0, watermark, 0, ALLOC_CMA))
>> +        pin = !!(current->flags & PF_MEMALLOC_PIN);
>> +        if (!zone_watermark_ok(zone, 0, watermark, 0, pin ? 0 : ALLOC_CMA))
>>               return 0;
>>       }
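
For completeness, here is a small stand-alone program (not kernel code;
the flag value, watermark and free-page numbers below are made up for
illustration) that mimics the watermark arithmetic the commit message
describes: free CMA pages only count toward the order-0 check when
ALLOC_CMA is set, which is why the check keeps passing on a node whose
remaining free memory is almost entirely CMA.

#include <stdbool.h>
#include <stdio.h>

#define ALLOC_CMA 0x1	/* stand-in bit, not the real kernel value */

/*
 * Rough model of the usable-free calculation: when the allocation may
 * not use CMA pageblocks, free CMA pages are subtracted before the
 * watermark comparison.
 */
static bool watermark_ok(long free_pages, long free_cma_pages,
			 long watermark, unsigned int alloc_flags)
{
	long usable = free_pages;

	if (!(alloc_flags & ALLOC_CMA))
		usable -= free_cma_pages;

	return usable > watermark;
}

int main(void)
{
	long free_cma = (16L << 30) >> 12;	/* 16GB of free CMA, in 4KB pages */
	long free_other = 32768;		/* little non-CMA memory left */
	long watermark = 65536;			/* example order-0 watermark + compact_gap() */

	printf("with ALLOC_CMA:    %d\n",
	       watermark_ok(free_cma + free_other, free_cma, watermark, ALLOC_CMA));
	printf("without ALLOC_CMA: %d\n",
	       watermark_ok(free_cma + free_other, free_cma, watermark, 0));
	return 0;
}

With the numbers above the first check passes and the second fails, i.e.
once ALLOC_CMA is dropped for the pinned allocation,
__compaction_suitable() would no longer report such a node as suitable.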

