Message-ID: <885afb7b-f5be-590a-00c8-a24d2bc65f37@oracle.com>
Date:   Wed, 10 Jul 2019 11:42:40 -0700
From:   Mike Kravetz <mike.kravetz@...cle.com>
To:     Hillf Danton <hdanton@...a.com>
Cc:     Vlastimil Babka <vbabka@...e.cz>, Michal Hocko <mhocko@...nel.org>,
        Mel Gorman <mgorman@...e.de>,
        "linux-mm@...ck.org" <linux-mm@...ck.org>,
        linux-kernel <linux-kernel@...r.kernel.org>,
        Johannes Weiner <hannes@...xchg.org>
Subject: Re: [Question] Should direct reclaim time be bounded?

On 7/7/19 10:19 PM, Hillf Danton wrote:
> On Mon, 01 Jul 2019 20:15:51 -0700 Mike Kravetz wrote:
>> On 7/1/19 1:59 AM, Mel Gorman wrote:
>>>
>>> I think it would be reasonable to have should_continue_reclaim allow an
>>> exit if scanning at higher priority than DEF_PRIORITY - 2, nr_scanned is
>>> less than SWAP_CLUSTER_MAX and no pages are being reclaimed.
>>
>> Thanks Mel,
>>
>> I added such a check to should_continue_reclaim.  However, it does not
>> address the issue I am seeing.  In that do-while loop in shrink_node,
>> the scan priority is not raised (priority--).  We can enter the loop
>> with priority == DEF_PRIORITY and continue to loop for minutes as seen
>> in my previous debug output.
>>
> Does it help raise priority in your case?

Thanks Hillf, and sorry for the delay in responding; I have been AFK.

I am not sure if you wanted this tried in addition to Mel's
suggestion, or on its own.

Unfortunately, such a change actually causes worse behavior.

> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -2543,11 +2543,18 @@ static inline bool should_continue_reclaim(struct pglist_data *pgdat,
>  	unsigned long pages_for_compaction;
>  	unsigned long inactive_lru_pages;
>  	int z;
> +	bool costly_fg_reclaim = false;
>  
>  	/* If not in reclaim/compaction mode, stop */
>  	if (!in_reclaim_compaction(sc))
>  		return false;
>  
> +	/* Let compact determine what to do for high order allocators */
> +	costly_fg_reclaim = sc->order > PAGE_ALLOC_COSTLY_ORDER &&
> +				!current_is_kswapd();
> +	if (costly_fg_reclaim)
> +		goto check_compact;

This goto makes us skip the 'if (!nr_reclaimed && !nr_scanned)' test.

> +
>  	/* Consider stopping depending on scan and reclaim activity */
>  	if (sc->gfp_mask & __GFP_RETRY_MAYFAIL) {
>  		/*
> @@ -2571,6 +2578,7 @@ static inline bool should_continue_reclaim(struct pglist_data *pgdat,
>  			return false;
>  	}
>  
> +check_compact:
>  	/*
>  	 * If we have not reclaimed enough pages for compaction and the
>  	 * inactive lists are large enough, continue reclaiming

It is quite easy to hit the case where:
nr_reclaimed == 0 && nr_scanned == 0 is true but the goto skips that test,

and the compaction check:
sc->nr_reclaimed < pages_for_compaction &&
	inactive_lru_pages > pages_for_compaction

is true, so we return true before ever reaching the costly_fg_reclaim check
below (see the sketch after the quoted patch).

> @@ -2583,6 +2591,9 @@ static inline bool should_continue_reclaim(struct pglist_data *pgdat,
>  			inactive_lru_pages > pages_for_compaction)
>  		return true;
>  
> +	if (costly_fg_reclaim)
> +		return false;
> +
>  	/* If compaction would go ahead or the allocation would succeed, stop */
>  	for (z = 0; z <= sc->reclaim_idx; z++) {
>  		struct zone *zone = &pgdat->node_zones[z];
> --
> 
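To make the ordering concrete, here is a hand-simplified sketch (not the
actual code) of the flow through should_continue_reclaim() with the patch
applied, for a costly !kswapd __GFP_RETRY_MAYFAIL request such as these
huge page allocations:

	if (!in_reclaim_compaction(sc))
		return false;

	costly_fg_reclaim = sc->order > PAGE_ALLOC_COSTLY_ORDER &&
				!current_is_kswapd();
	if (costly_fg_reclaim)
		goto check_compact;			/* jumps past the test below */

	if (sc->gfp_mask & __GFP_RETRY_MAYFAIL) {
		if (!nr_reclaimed && !nr_scanned)	/* skipped by the goto */
			return false;
	}

check_compact:
	/* pages_for_compaction and inactive_lru_pages computed as before */
	if (sc->nr_reclaimed < pages_for_compaction &&
	    inactive_lru_pages > pages_for_compaction)
		return true;				/* we return here ... */

	if (costly_fg_reclaim)
		return false;				/* ... so this is never reached */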

As Michal suggested, I'm going to do some testing to see what impact
dropping the __GFP_RETRY_MAYFAIL flag for these huge page allocations
will have on the number of pages allocated.
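
The experiment amounts to roughly the following in mm/hugetlb.c (untested
sketch from memory; the exact gfp setup in alloc_buddy_huge_page() may
differ):

	static struct page *alloc_buddy_huge_page(struct hstate *h, gfp_t gfp_mask,
						  int nid, nodemask_t *nmask)
	{
		int order = huge_page_order(h);
		struct page *page;

		/*
		 * hugetlb normally ORs in __GFP_RETRY_MAYFAIL here so pool
		 * allocations reclaim/retry hard; drop it and count how many
		 * pages we still manage to allocate.
		 */
		gfp_mask |= __GFP_COMP | __GFP_NOWARN;	/* no __GFP_RETRY_MAYFAIL */

		if (nid == NUMA_NO_NODE)
			nid = numa_mem_id();

		page = __alloc_pages_nodemask(gfp_mask, order, nid, nmask);
		if (page)
			__count_vm_event(HTLB_BUDDY_PGALLOC);
		else
			__count_vm_event(HTLB_BUDDY_PGALLOC_FAIL);

		return page;
	}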
-- 
Mike Kravetz
