Message-ID: <20161012072409.GB9504@dhcp22.suse.cz>
Date:   Wed, 12 Oct 2016 09:24:09 +0200
From:   Michal Hocko <mhocko@...nel.org>
To:     Minchan Kim <minchan@...nel.org>
Cc:     Andrew Morton <akpm@...ux-foundation.org>,
        Mel Gorman <mgorman@...hsingularity.net>,
        Vlastimil Babka <vbabka@...e.cz>,
        Joonsoo Kim <iamjoonsoo.kim@....com>,
        linux-kernel@...r.kernel.org, linux-mm@...ck.org,
        Sangseok Lee <sangseok.lee@....com>
Subject: Re: [PATCH v2 3/4] mm: try to exhaust highatomic reserve before the
 OOM

On Wed 12-10-16 14:33:35, Minchan Kim wrote:
> It's weird to show that a zone has enough free memory above the min
> watermark but still OOMs on a 4K GFP_KERNEL allocation due to
> reserved highatomic pages. As a last resort, try to unreserve
> highatomic pages again and, if that moved pages to the
> non-highatomic free list, retry reclaim once more.

Agreed with Vlastimil on including the OOM report in the changelog. The above
will not tell the reader much about what the situation actually looks like
or whether the patch is really needed in their particular situation.

A few nits below, but in general this looks good to me.

> Signed-off-by: Michal Hocko <mhocko@...e.com>
> Signed-off-by: Minchan Kim <minchan@...nel.org>
> ---
>  mm/page_alloc.c | 15 +++++++++++----
>  1 file changed, 11 insertions(+), 4 deletions(-)
> 
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 18808f392718..a7472426663f 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -2080,7 +2080,7 @@ static void reserve_highatomic_pageblock(struct page *page, struct zone *zone,
>   * intense memory pressure but failed atomic allocations should be easier
>   * to recover from than an OOM.
>   */
> -static void unreserve_highatomic_pageblock(const struct alloc_context *ac)
> +static bool unreserve_highatomic_pageblock(const struct alloc_context *ac)
>  {
>  	struct zonelist *zonelist = ac->zonelist;
>  	unsigned long flags;
> @@ -2088,6 +2088,7 @@ static void unreserve_highatomic_pageblock(const struct alloc_context *ac)
>  	struct zone *zone;
>  	struct page *page;
>  	int order;
> +	bool ret = false;

no need for the initialization, see below (and the sketch after the diff)
>  
>  	for_each_zone_zonelist_nodemask(zone, z, zonelist, ac->high_zoneidx,
>  								ac->nodemask) {
> @@ -2136,12 +2137,14 @@ static void unreserve_highatomic_pageblock(const struct alloc_context *ac)
>  			 * may increase.
>  			 */
>  			set_pageblock_migratetype(page, ac->migratetype);
> -			move_freepages_block(zone, page, ac->migratetype);
> +			ret = move_freepages_block(zone, page, ac->migratetype);
>  			spin_unlock_irqrestore(&zone->lock, flags);
> -			return;
> +			return ret;
>  		}
>  		spin_unlock_irqrestore(&zone->lock, flags);
>  	}
> +
> +	return ret;

	return false;
>  }
>  
>  /* Remove an element from the buddy allocator from the fallback list */
> @@ -3457,8 +3460,12 @@ should_reclaim_retry(gfp_t gfp_mask, unsigned order,
>  	 * Make sure we converge to OOM if we cannot make any progress
>  	 * several times in the row.
>  	 */
> -	if (*no_progress_loops > MAX_RECLAIM_RETRIES)
> +	if (*no_progress_loops > MAX_RECLAIM_RETRIES) {
> +		/* Before OOM, exhaust highatomic_reserve */
> +		if (unreserve_highatomic_pageblock(ac))
> +			return true;

		return unreserve_highatomic_pageblock(ac);

>  		return false;
> +	}
>  
>  	/*
>  	 * Keep reclaiming pages while there is a chance this will lead
> -- 
> 2.7.4
> 
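
Putting both nits together, here is a minimal, toy standalone sketch of the
shape I have in mind (not the actual kernel code: zone iteration, locking and
move_freepages_block() are replaced by stand-ins): no initialization of ret,
a literal return false; on the fall-through path, and the caller returning
the result directly.

#include <stdbool.h>
#include <stdio.h>

struct zone { int reserved_highatomic; };

/* stand-in for move_freepages_block(): pretend we moved some pages */
static bool move_reserved_pages(struct zone *zone)
{
	if (zone->reserved_highatomic > 0) {
		zone->reserved_highatomic = 0;
		return true;
	}
	return false;
}

/* shape of unreserve_highatomic_pageblock() after the cleanup */
static bool unreserve_highatomic_pageblock(struct zone *zones, int nr_zones)
{
	for (int i = 0; i < nr_zones; i++) {
		struct zone *zone = &zones[i];

		if (zone->reserved_highatomic) {
			/* return as soon as one pageblock has been handled */
			return move_reserved_pages(zone);
		}
	}
	/* nothing to unreserve on any zone */
	return false;
}

/* shape of the should_reclaim_retry() hunk: return the result directly */
static bool should_retry_before_oom(struct zone *zones, int nr_zones)
{
	/* Before OOM, exhaust the highatomic reserve */
	return unreserve_highatomic_pageblock(zones, nr_zones);
}

int main(void)
{
	struct zone zones[2] = { { .reserved_highatomic = 0 },
				 { .reserved_highatomic = 4 } };

	printf("retry reclaim: %s\n",
	       should_retry_before_oom(zones, 2) ? "yes" : "no");
	return 0;
}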

-- 
Michal Hocko
SUSE Labs
