Message-ID: <2b7f401d-8041-9d64-595d-f95109a52e3b@suse.cz>
Date: Thu, 22 Oct 2020 16:52:42 +0200
From: Vlastimil Babka <vbabka@...e.cz>
To: Rik van Riel <riel@...riel.com>, Hugh Dickins <hughd@...gle.com>,
Xu Yu <xuyu@...ux.alibaba.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, kernel-team@...com,
Mel Gorman <mgorman@...e.de>,
Andrea Arcangeli <aarcange@...hat.com>,
Matthew Wilcox <willy@...radead.org>,
Michal Hocko <mhocko@...e.com>
Subject: Re: [PATCH] mm,thp,shmem: limit shmem THP alloc gfp_mask
On 10/22/20 4:51 PM, Vlastimil Babka wrote:
> On 10/22/20 5:48 AM, Rik van Riel wrote:
>> The allocation flags of anonymous transparent huge pages can be controlled
>> through the /sys/kernel/mm/transparent_hugepage/defrag file, which can
>> help keep the system from getting bogged down in the page reclaim and
>> compaction code when many THPs are getting allocated simultaneously.
>>
>> However, the gfp_mask for shmem THP allocations was not limited by those
>> configuration settings, and some workloads ended up with all CPUs stuck
>> on the LRU lock in the page reclaim code, trying to allocate dozens of
>> THPs simultaneously.
>>
>> This patch applies the same configured limits to shmem hugepage
>> allocations, to prevent that from happening.
>>
>> This way a THP defrag setting of "never" or "defer+madvise" will result
>> in quick allocation failures without direct reclaim when no 2MB free
>> pages are available.
>>
>> Signed-off-by: Rik van Riel <riel@...riel.com>
>
> FTR, a patch to the same effect was sent by Xu Yu:
Hm, I thought I did CC, but TB ate it. Sorry for the noise.
> https://lore.kernel.org/r/11e1ead211eb7d141efa0eb75a46ee2096ee63f8.1603267572.git.xuyu@linux.alibaba.com
>
>> ---
>>
>> diff --git a/include/linux/gfp.h b/include/linux/gfp.h
>> index c603237e006c..0a5b164a26d9 100644
>> --- a/include/linux/gfp.h
>> +++ b/include/linux/gfp.h
>> @@ -614,6 +614,8 @@ bool gfp_pfmemalloc_allowed(gfp_t gfp_mask);
>> extern void pm_restrict_gfp_mask(void);
>> extern void pm_restore_gfp_mask(void);
>>
>> +extern gfp_t alloc_hugepage_direct_gfpmask(struct vm_area_struct *vma);
>> +
>> #ifdef CONFIG_PM_SLEEP
>> extern bool pm_suspended_storage(void);
>> #else
>> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
>> index 9474dbc150ed..9b08ce5cc387 100644
>> --- a/mm/huge_memory.c
>> +++ b/mm/huge_memory.c
>> @@ -649,7 +649,7 @@ static vm_fault_t __do_huge_pmd_anonymous_page(struct vm_fault *vmf,
>> * available
>> * never: never stall for any thp allocation
>> */
>> -static inline gfp_t alloc_hugepage_direct_gfpmask(struct vm_area_struct *vma)
>> +gfp_t alloc_hugepage_direct_gfpmask(struct vm_area_struct *vma)
>> {
>> const bool vma_madvised = !!(vma->vm_flags & VM_HUGEPAGE);
>>
>> diff --git a/mm/shmem.c b/mm/shmem.c
>> index 537c137698f8..d1290eb508e5 100644
>> --- a/mm/shmem.c
>> +++ b/mm/shmem.c
>> @@ -1545,8 +1545,11 @@ static struct page *shmem_alloc_hugepage(gfp_t gfp,
>> return NULL;
>>
>> shmem_pseudo_vma_init(&pvma, info, hindex);
>> - page = alloc_pages_vma(gfp | __GFP_COMP | __GFP_NORETRY | __GFP_NOWARN,
>> - HPAGE_PMD_ORDER, &pvma, 0, numa_node_id(), true);
>> + /* Limit the gfp mask according to THP configuration. */
>> + gfp |= __GFP_COMP | __GFP_NORETRY | __GFP_NOWARN;
>> + gfp &= alloc_hugepage_direct_gfpmask(&pvma);
>> + page = alloc_pages_vma(gfp, HPAGE_PMD_ORDER, &pvma, 0, numa_node_id(),
>> + true);
>> shmem_pseudo_vma_destroy(&pvma);
>> if (page)
>> prep_transhuge_page(page);
>>
>
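For context, and not part of the patch: below is a rough sketch, from memory,
of how alloc_hugepage_direct_gfpmask() in mm/huge_memory.c maps the defrag
modes to a gfp mask (exact flag names and details should be checked against
the tree). The point is that only "always", plus madvised vmas under
"madvise" or "defer+madvise", keep __GFP_DIRECT_RECLAIM, so masking the
shmem gfp with this return value is what turns the "never" and (for
not-madvised mappings) "defer+madvise" cases into quick failures instead of
direct reclaim/compaction:

gfp_t alloc_hugepage_direct_gfpmask(struct vm_area_struct *vma)
{
	const bool vma_madvised = !!(vma->vm_flags & VM_HUGEPAGE);

	/* "always": stall for direct reclaim/compaction on every THP fault */
	if (test_bit(TRANSPARENT_HUGEPAGE_DEFRAG_DIRECT_FLAG,
		     &transparent_hugepage_flags))
		return GFP_TRANSHUGE | (vma_madvised ? 0 : __GFP_NORETRY);

	/* "defer": wake kswapd/kcompactd, never stall */
	if (test_bit(TRANSPARENT_HUGEPAGE_DEFRAG_KSWAPD_FLAG,
		     &transparent_hugepage_flags))
		return GFP_TRANSHUGE_LIGHT | __GFP_KSWAPD_RECLAIM;

	/* "defer+madvise": stall only for madvised vmas, else just kick kswapd */
	if (test_bit(TRANSPARENT_HUGEPAGE_DEFRAG_KSWAPD_OR_MADV_FLAG,
		     &transparent_hugepage_flags))
		return GFP_TRANSHUGE_LIGHT |
		       (vma_madvised ? __GFP_DIRECT_RECLAIM :
				       __GFP_KSWAPD_RECLAIM);

	/* "madvise": stall only for madvised vmas */
	if (test_bit(TRANSPARENT_HUGEPAGE_DEFRAG_REQ_MADV_FLAG,
		     &transparent_hugepage_flags))
		return GFP_TRANSHUGE_LIGHT |
		       (vma_madvised ? __GFP_DIRECT_RECLAIM : 0);

	/* "never": no direct reclaim and no kswapd wakeup -> fail fast */
	return GFP_TRANSHUGE_LIGHT;
}

With the patch, shmem_alloc_hugepage() ands its gfp with this return value,
so e.g. under "never" the __GFP_DIRECT_RECLAIM bit is cleared before
alloc_pages_vma() is called.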