Message-ID: <62fc8d07474acaccb65a9e20a4ebc0127e417f05.camel@surriel.com>
Date:   Thu, 22 Oct 2020 12:40:53 -0400
From:   Rik van Riel <riel@...riel.com>
To:     Yu Xu <xuyu@...ux.alibaba.com>, Hugh Dickins <hughd@...gle.com>
Cc:     Andrew Morton <akpm@...ux-foundation.org>, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org, kernel-team@...com,
        Mel Gorman <mgorman@...e.de>,
        Andrea Arcangeli <aarcange@...hat.com>,
        Matthew Wilcox <willy@...radead.org>
Subject: Re: [PATCH] mm,thp,shmem: limit shmem THP alloc gfp_mask

On Fri, 2020-10-23 at 00:00 +0800, Yu Xu wrote:
> On 10/22/20 11:48 AM, Rik van Riel wrote:
> > The allocation flags of anonymous transparent huge pages can be
> > controlled through the files in
> > /sys/kernel/mm/transparent_hugepage/defrag, which can keep the
> > system from getting bogged down in the page reclaim and compaction
> > code when many THPs are getting allocated simultaneously.
> > 
> > However, the gfp_mask for shmem THP allocations was not limited by
> > those configuration settings, and some workloads ended up with all
> > CPUs stuck on the LRU lock in the page reclaim code, trying to
> > allocate dozens of THPs simultaneously.
> > 
> > This patch applies the same configured limitation to shmem hugepage
> > allocations, to prevent that from happening.
> > 
> > This way a THP defrag setting of "never" or "defer+madvise" will
> > result in quick allocation failures without direct reclaim when no
> > 2MB free pages are available.
> > 
> > Signed-off-by: Rik van Riel <riel@...riel.com>
> > ---
> > 
> > diff --git a/include/linux/gfp.h b/include/linux/gfp.h
> > index c603237e006c..0a5b164a26d9 100644
> > --- a/include/linux/gfp.h
> > +++ b/include/linux/gfp.h
> > @@ -614,6 +614,8 @@ bool gfp_pfmemalloc_allowed(gfp_t gfp_mask);
> >   extern void pm_restrict_gfp_mask(void);
> >   extern void pm_restore_gfp_mask(void);
> >   
> > +extern gfp_t alloc_hugepage_direct_gfpmask(struct vm_area_struct *vma);
> > +
> >   #ifdef CONFIG_PM_SLEEP
> >   extern bool pm_suspended_storage(void);
> >   #else
> > diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> > index 9474dbc150ed..9b08ce5cc387 100644
> > --- a/mm/huge_memory.c
> > +++ b/mm/huge_memory.c
> > @@ -649,7 +649,7 @@ static vm_fault_t __do_huge_pmd_anonymous_page(struct vm_fault *vmf,
> >    *	    available
> >    * never: never stall for any thp allocation
> >    */
> > -static inline gfp_t alloc_hugepage_direct_gfpmask(struct vm_area_struct *vma)
> > +gfp_t alloc_hugepage_direct_gfpmask(struct vm_area_struct *vma)
> >   {
> >   	const bool vma_madvised = !!(vma->vm_flags & VM_HUGEPAGE);
> >   
> > diff --git a/mm/shmem.c b/mm/shmem.c
> > index 537c137698f8..d1290eb508e5 100644
> > --- a/mm/shmem.c
> > +++ b/mm/shmem.c
> > @@ -1545,8 +1545,11 @@ static struct page *shmem_alloc_hugepage(gfp_t gfp,
> >   		return NULL;
> >   
> >   	shmem_pseudo_vma_init(&pvma, info, hindex);
> > -	page = alloc_pages_vma(gfp | __GFP_COMP | __GFP_NORETRY | __GFP_NOWARN,
> > -			HPAGE_PMD_ORDER, &pvma, 0, numa_node_id(), true);
> > +	/* Limit the gfp mask according to THP configuration. */
> > +	gfp |= __GFP_COMP | __GFP_NORETRY | __GFP_NOWARN;
> > +	gfp &= alloc_hugepage_direct_gfpmask(&pvma);
> 
> It is fine to reuse `alloc_hugepage_direct_gfpmask`, but
> `pvma.vm_flags & VM_HUGEPAGE` is always false here, and thus
> `vma_madvised` in `alloc_hugepage_direct_gfpmask` will always
> be false.
> 
> That is why I chose to do the gfp_mask fixup in `shmem_getpage_gfp`,
> using `sgp_huge` to indicate `vma_madvised`, although with some silly
> mistakes that you pointed out in another mail thread.
> 
> It would be better if vma_madvised were handled properly in your
> solution.

OK, let me send a v2 that does that!
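
For reference, the decision Yu Xu is pointing at lives in
alloc_hugepage_direct_gfpmask(). The sketch below is a simplified,
standalone paraphrase of that logic, not the real kernel code:
thp_gfpmask(), enum thp_defrag and the flag values are stand-ins for
the actual GFP_TRANSHUGE*/__GFP_* definitions. It shows how the defrag
setting and vma_madvised together decide how hard the allocation may
try. Because the shmem pseudo-vma never has VM_HUGEPAGE set,
vma_madvised is always false there, so only the non-madvised branches
can be reached from shmem_alloc_hugepage().

enum thp_defrag {
	DEFRAG_ALWAYS,		/* "always" */
	DEFRAG_DEFER,		/* "defer" */
	DEFRAG_DEFER_MADVISE,	/* "defer+madvise" */
	DEFRAG_MADVISE,		/* "madvise" */
	DEFRAG_NEVER,		/* "never" */
};

typedef unsigned int gfp_t;

/* Stand-in flag values, only so the sketch compiles on its own. */
#define GFP_TRANSHUGE		0x01u	/* may enter direct reclaim/compaction */
#define GFP_TRANSHUGE_LIGHT	0x02u	/* never enters direct reclaim */
#define __GFP_DIRECT_RECLAIM	0x04u
#define __GFP_KSWAPD_RECLAIM	0x08u
#define __GFP_NORETRY		0x10u

static gfp_t thp_gfpmask(enum thp_defrag defrag, int vma_madvised)
{
	switch (defrag) {
	case DEFRAG_ALWAYS:		/* always stall for compaction */
		return GFP_TRANSHUGE | (vma_madvised ? 0 : __GFP_NORETRY);
	case DEFRAG_DEFER:		/* kick kcompactd, never stall */
		return GFP_TRANSHUGE_LIGHT | __GFP_KSWAPD_RECLAIM;
	case DEFRAG_DEFER_MADVISE:	/* stall only for madvised vmas */
		return GFP_TRANSHUGE_LIGHT | (vma_madvised ?
			__GFP_DIRECT_RECLAIM : __GFP_KSWAPD_RECLAIM);
	case DEFRAG_MADVISE:		/* direct reclaim only if madvised */
		return GFP_TRANSHUGE_LIGHT | (vma_madvised ?
			__GFP_DIRECT_RECLAIM : 0);
	case DEFRAG_NEVER:		/* never stall, fail fast */
	default:
		return GFP_TRANSHUGE_LIGHT;
	}
}

With defrag set to "never", for example, the resulting mask never
allows direct reclaim, which is the fast-fail behaviour the changelog
above describes.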

By just passing a correct gfp_mask to shmem_alloc_and_acct_page, we can
also avoid the gfp gymnastics in shmem_alloc_hugepage that Michal
rightfully objected to.
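
Roughly the shape that could take (a hypothetical sketch, not the
actual v2: shmem_fault_path(), alloc_hugepage_direct_gfpmask_madv()
and the simplified argument lists are made up for illustration): the
caller that knows the real vma computes the limited huge-page gfp once
and passes it down, so the pseudo-vma no longer has to stand in for
the madvise state.

typedef unsigned int gfp_t;
struct page;

/* Stand-in declarations for the helpers discussed in this thread. */
gfp_t alloc_hugepage_direct_gfpmask_madv(int vma_madvised);	/* like the sketch above */
struct page *shmem_alloc_and_acct_page(gfp_t gfp, int huge);	/* argument list simplified */

static struct page *shmem_fault_path(gfp_t mapping_gfp, int vma_madvised,
				     int want_huge)
{
	if (want_huge) {
		/*
		 * Mask the mapping's gfp with what the defrag policy allows,
		 * using the real madvise state instead of the pseudo-vma.
		 */
		gfp_t huge_gfp = mapping_gfp &
				 alloc_hugepage_direct_gfpmask_madv(vma_madvised);

		return shmem_alloc_and_acct_page(huge_gfp, 1);
	}
	return shmem_alloc_and_acct_page(mapping_gfp, 0);
}

The masking itself stays the same AND as in the patch above; only the
place where it happens moves, so the madvise information is still
available when the mask is computed.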

-- 
All Rights Reversed.
