Message-ID: <20200609151330.GL3127@techsingularity.net>
Date:   Tue, 9 Jun 2020 16:13:30 +0100
From:   Mel Gorman <mgorman@...hsingularity.net>
To:     Baoquan He <bhe@...hat.com>
Cc:     Jaewon Kim <jaewon31.kim@...sung.com>, minchan@...nel.org,
        mgorman@...e.de, hannes@...xchg.org, akpm@...ux-foundation.org,
        linux-mm@...ck.org, linux-kernel@...r.kernel.org,
        jaewon31.kim@...il.com, ytk.lee@...sung.com,
        cmlaika.kim@...sung.com
Subject: Re: [PATCH] page_alloc: consider highatomic reserve in watermark fast

On Tue, Jun 09, 2020 at 10:27:47PM +0800, Baoquan He wrote:
> > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > index 13cc653122b7..00869378d387 100644
> > --- a/mm/page_alloc.c
> > +++ b/mm/page_alloc.c
> > @@ -3553,6 +3553,11 @@ static inline bool zone_watermark_fast(struct zone *z, unsigned int order,
> >  {
> >  	long free_pages = zone_page_state(z, NR_FREE_PAGES);
> >  	long cma_pages = 0;
> > +	long highatomic = 0;
> > +	const bool alloc_harder = (alloc_flags & (ALLOC_HARDER|ALLOC_OOM));
> > +
> > +	if (likely(!alloc_harder))
> > +		highatomic = z->nr_reserved_highatomic;
> >  
> >  #ifdef CONFIG_CMA
> >  	/* If allocation can't use CMA areas don't use free CMA pages */
> > @@ -3567,8 +3572,12 @@ static inline bool zone_watermark_fast(struct zone *z, unsigned int order,
> >  	 * the caller is !atomic then it'll uselessly search the free
> >  	 * list. That corner case is then slower but it is harmless.
> >  	 */
> > -	if (!order && (free_pages - cma_pages) > mark + z->lowmem_reserve[classzone_idx])
> > -		return true;
> > +	if (!order) {
> > +		long fast_free = free_pages - cma_pages - highatomic;
> > +
> > +		if (fast_free > mark + z->lowmem_reserve[classzone_idx])
> > +			return true;
> > +	}
> 
> This looks reasonable to me. However, this change does not appear to be
> based on the latest mainline or mm tree. E.g., in commit 97a225e69a1f8
> ("mm/page_alloc: integrate classzone_idx and high_zoneidx"), classzone_idx
> was renamed to highest_zoneidx.
> 

That's fine, I simply wanted to illustrate where I thought the check
should go to minimise the impact on the majority of allocations.
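
For reference, a sketch of what the whole function would look like with
this check applied on top of a tree that already has the highest_zoneidx
rename. This is illustrative only, reconstructed from the mainline code
the diff applies to, and not necessarily what will be committed:

static inline bool zone_watermark_fast(struct zone *z, unsigned int order,
				       unsigned long mark, int highest_zoneidx,
				       unsigned int alloc_flags)
{
	long free_pages = zone_page_state(z, NR_FREE_PAGES);
	long cma_pages = 0;
	long highatomic = 0;
	const bool alloc_harder = (alloc_flags & (ALLOC_HARDER|ALLOC_OOM));

	/* Only discount the reserve for callers that cannot dip into it */
	if (likely(!alloc_harder))
		highatomic = z->nr_reserved_highatomic;

#ifdef CONFIG_CMA
	/* If allocation can't use CMA areas don't use free CMA pages */
	if (!(alloc_flags & ALLOC_CMA))
		cma_pages = zone_page_state(z, NR_FREE_CMA_PAGES);
#endif

	/*
	 * Fast check for order-0 only: succeed without calculating the
	 * full reserves if the usable free pages, excluding CMA and the
	 * highatomic reserve, clear the watermark plus lowmem reserve.
	 */
	if (!order) {
		long fast_free = free_pages - cma_pages - highatomic;

		if (fast_free > mark + z->lowmem_reserve[highest_zoneidx])
			return true;
	}

	return __zone_watermark_ok(z, order, mark, highest_zoneidx,
				   alloc_flags, free_pages);
}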

-- 
Mel Gorman
SUSE Labs
