Message-ID: <03d09def-2509-4e87-ad14-cf616ac90908@linux.alibaba.com>
Date: Tue, 17 Dec 2024 15:31:36 +0800
From: Baolin Wang <baolin.wang@...ux.alibaba.com>
To: Barry Song <21cnbao@...il.com>, yangge1116@....com
Cc: akpm@...ux-foundation.org, linux-mm@...ck.org,
 linux-kernel@...r.kernel.org, stable@...r.kernel.org, david@...hat.com,
 vbabka@...e.cz, liuzixing@...on.cn
Subject: Re: [PATCH V6] mm, compaction: don't use ALLOC_CMA in long term GUP
 flow



On 2024/12/17 14:14, Barry Song wrote:
> On Tue, Dec 17, 2024 at 4:33 PM <yangge1116@....com> wrote:
>>
>> From: yangge <yangge1116@....com>
>>
>> Since commit 984fdba6a32e ("mm, compaction: use proper alloc_flags
>> in __compaction_suitable()") allowed compaction to proceed when the
>> free pages required for compaction reside in CMA pageblocks,
>> __compaction_suitable() can end up always returning true, which is
>> not acceptable in some cases.
>>
>> There are 4 NUMA nodes on my machine, each with 32GB of memory. I
>> have configured 16GB of CMA memory on each NUMA node, and starting
>> a 32GB virtual machine with device passthrough is extremely slow,
>> taking almost an hour.
>>
>> During the start-up of the virtual machine, it calls
>> pin_user_pages_remote(..., FOLL_LONGTERM, ...) to allocate memory.
>> Long-term GUP cannot allocate memory from the CMA area, so at most
>> 16GB of non-CMA memory on a NUMA node can be used as virtual
>> machine memory. Since there is 16GB of free CMA memory on the NUMA
> 
> Other unmovable allocations, like dma_buf, which can be large in a
> Linux system, are
> also unable to allocate memory from CMA. My question is whether the issue you
> described applies to these allocations as well.
> 
>> node, the order-0 watermark for compaction is always met, so
>> __compaction_suitable() always returns true, even if the node is
>> unable to allocate non-CMA memory for the virtual machine.
>>
>> For costly allocations, because __compaction_suitable() always
>> returns true, __alloc_pages_slowpath() can't exit at the appropriate
>> place, resulting in excessively long virtual machine startup times.
>> Call trace:
>> __alloc_pages_slowpath
>>      if (compact_result == COMPACT_SKIPPED ||
>>          compact_result == COMPACT_DEFERRED)
>>          goto nopage; // should exit __alloc_pages_slowpath() from here
>>
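For context, the check that keeps succeeding here is the order-0
watermark test in __compaction_suitable(), which before this patch
passes ALLOC_CMA unconditionally. A simplified sketch of the pre-patch
mainline code (details vary slightly across kernel versions):

static bool __compaction_suitable(struct zone *zone, int order,
                                  int highest_zoneidx,
                                  unsigned long wmark_target)
{
        unsigned long watermark;

        /* Low watermark for costly orders, min watermark otherwise */
        watermark = (order > PAGE_ALLOC_COSTLY_ORDER) ?
                                low_wmark_pages(zone) : min_wmark_pages(zone);
        watermark += compact_gap(order);
        /*
         * ALLOC_CMA makes __zone_watermark_ok() count free CMA pages,
         * so a node with 16GB of free CMA always passes this check,
         * even for a long-term pin that cannot use those pages.
         */
        return __zone_watermark_ok(zone, 0, watermark, highest_zoneidx,
                                   ALLOC_CMA, wmark_target);
}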
> 
> Do we face the same issue if we allocate dma-buf while CMA has plenty
> of free memory, but non-CMA has none?
> 
>> In order to fall back quickly to a remote node, remove ALLOC_CMA
>> from both __compaction_suitable() and __isolate_free_page() in the
>> long-term GUP flow. After this fix, starting a 32GB virtual machine
>> with device passthrough takes only a few seconds.
>>
>> Fixes: 984fdba6a32e ("mm, compaction: use proper alloc_flags in __compaction_suitable()")
>> Cc: <stable@...r.kernel.org>
>> Signed-off-by: yangge <yangge1116@....com>
>> Reviewed-by: Baolin Wang <baolin.wang@...ux.alibaba.com>
>> ---
>>
>> V6:
>> - update cc->alloc_flags to keep the original logic
>>
>> V5:
>> - add 'alloc_flags' parameter for __isolate_free_page()
>> - remove 'usa_cma' variable
>>
>> V4:
>> - enrich the commit log description
>>
>> V3:
>> - fix build errors
>> - add ALLOC_CMA in both should_continue_reclaim() and compaction_ready()
>>
>> V2:
>> - use 'cc->alloc_flags' to determine whether 'ALLOC_CMA' is needed
>> - enrich the commit log description
>>
>>   include/linux/compaction.h |  6 ++++--
>>   mm/compaction.c            | 26 +++++++++++++++-----------
>>   mm/internal.h              |  3 ++-
>>   mm/page_alloc.c            |  7 +++++--
>>   mm/page_isolation.c        |  3 ++-
>>   mm/page_reporting.c        |  2 +-
>>   mm/vmscan.c                |  4 ++--
>>   7 files changed, 31 insertions(+), 20 deletions(-)
>>
>> diff --git a/include/linux/compaction.h b/include/linux/compaction.h
>> index e947764..b4c3ac3 100644
>> --- a/include/linux/compaction.h
>> +++ b/include/linux/compaction.h
>> @@ -90,7 +90,8 @@ extern enum compact_result try_to_compact_pages(gfp_t gfp_mask,
>>                  struct page **page);
>>   extern void reset_isolation_suitable(pg_data_t *pgdat);
>>   extern bool compaction_suitable(struct zone *zone, int order,
>> -                                              int highest_zoneidx);
>> +                                              int highest_zoneidx,
>> +                                              unsigned int alloc_flags);
>>
>>   extern void compaction_defer_reset(struct zone *zone, int order,
>>                                  bool alloc_success);
>> @@ -108,7 +109,8 @@ static inline void reset_isolation_suitable(pg_data_t *pgdat)
>>   }
>>
>>   static inline bool compaction_suitable(struct zone *zone, int order,
>> -                                                     int highest_zoneidx)
>> +                                                     int highest_zoneidx,
>> +                                                     unsigned int alloc_flags)
>>   {
>>          return false;
>>   }
>> diff --git a/mm/compaction.c b/mm/compaction.c
>> index 07bd227..d92ba6c 100644
>> --- a/mm/compaction.c
>> +++ b/mm/compaction.c
>> @@ -655,7 +655,7 @@ static unsigned long isolate_freepages_block(struct compact_control *cc,
>>
>>                  /* Found a free page, will break it into order-0 pages */
>>                  order = buddy_order(page);
>> -               isolated = __isolate_free_page(page, order);
>> +               isolated = __isolate_free_page(page, order, cc->alloc_flags);
>>                  if (!isolated)
>>                          break;
>>                  set_page_private(page, order);
>> @@ -1634,7 +1634,7 @@ static void fast_isolate_freepages(struct compact_control *cc)
>>
>>                  /* Isolate the page if available */
>>                  if (page) {
>> -                       if (__isolate_free_page(page, order)) {
>> +                       if (__isolate_free_page(page, order, cc->alloc_flags)) {
>>                                  set_page_private(page, order);
>>                                  nr_isolated = 1 << order;
>>                                  nr_scanned += nr_isolated - 1;
>> @@ -2381,6 +2381,7 @@ static enum compact_result compact_finished(struct compact_control *cc)
>>
>>   static bool __compaction_suitable(struct zone *zone, int order,
>>                                    int highest_zoneidx,
>> +                                 unsigned int alloc_flags,
>>                                    unsigned long wmark_target)
>>   {
>>          unsigned long watermark;
>> @@ -2395,25 +2396,26 @@ static bool __compaction_suitable(struct zone *zone, int order,
>>           * even if compaction succeeds.
>>           * For costly orders, we require low watermark instead of min for
>>           * compaction to proceed to increase its chances.
>> -        * ALLOC_CMA is used, as pages in CMA pageblocks are considered
>> -        * suitable migration targets
>> +        * Except for the long-term GUP flow, ALLOC_CMA is used, as
>> +        * pages in CMA pageblocks are considered suitable migration targets
> 
> I'm not sure this comment is correct for cases other than GUP.

Yes, we should update the comment for the other cases where CMA cannot
be used. That's why we use the passed-in 'alloc_flags' to determine
whether 'ALLOC_CMA' is needed, instead of checking 'current->flags &
PF_MEMALLOC_PIN'.
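For reference, the page allocator already encodes this rule when it
computes alloc_flags: with PF_MEMALLOC_PIN set, current_gfp_context()
strips __GFP_MOVABLE, so the CMA helper never sets ALLOC_CMA for a
long-term pinner. A simplified sketch of the mainline logic (helper
names and placement have varied across kernel versions):

/* include/linux/sched/mm.h, simplified: long-term pinners run with
 * PF_MEMALLOC_PIN, which strips __GFP_MOVABLE from their allocations. */
static inline gfp_t current_gfp_context(gfp_t flags)
{
        unsigned int pflags = READ_ONCE(current->flags);

        if (unlikely(pflags & PF_MEMALLOC_PIN))
                flags &= ~__GFP_MOVABLE;

        return flags;
}

/* mm/page_alloc.c, simplified: only movable allocations may fall back
 * to CMA pageblocks, so a pinned allocation never gets ALLOC_CMA. */
static inline unsigned int gfp_to_alloc_flags_cma(gfp_t gfp_mask,
                                                  unsigned int alloc_flags)
{
#ifdef CONFIG_CMA
        if (gfp_migratetype(gfp_mask) == MIGRATE_MOVABLE)
                alloc_flags |= ALLOC_CMA;
#endif
        return alloc_flags;
}

With the patch, compaction honours whatever the caller computed here by
testing 'alloc_flags & ALLOC_CMA' in __compaction_suitable() and
__isolate_free_page(), instead of re-deriving the decision from
'current->flags'.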
