Message-ID: <80036eed-993d-1d24-7ab6-e495f01b1caa@oracle.com>
Date:   Mon, 1 Jul 2019 20:15:50 -0700
From:   Mike Kravetz <mike.kravetz@...cle.com>
To:     Mel Gorman <mgorman@...e.de>
Cc:     Vlastimil Babka <vbabka@...e.cz>, Michal Hocko <mhocko@...nel.org>,
        "linux-mm@...ck.org" <linux-mm@...ck.org>,
        linux-kernel <linux-kernel@...r.kernel.org>,
        Andrea Arcangeli <aarcange@...hat.com>,
        Johannes Weiner <hannes@...xchg.org>
Subject: Re: [Question] Should direct reclaim time be bounded?

On 7/1/19 1:59 AM, Mel Gorman wrote:
> On Fri, Jun 28, 2019 at 11:20:42AM -0700, Mike Kravetz wrote:
>> On 4/24/19 7:35 AM, Vlastimil Babka wrote:
>>> On 4/23/19 6:39 PM, Mike Kravetz wrote:
>>>>> That being said, I do not think __GFP_RETRY_MAYFAIL is wrong here. It
>>>>> looks like there is something wrong going on in the reclaim.
>>>>
>>>> Ok, I will start digging into that.  Just wanted to make sure before I got
>>>> into it too deep.
>>>>
>>>> BTW - This is very easy to reproduce.  Just try to allocate more huge pages
>>>> than will fit into memory.  I see this 'reclaim taking forever' behavior on
>>>> v5.1-rc5-mmotm-2019-04-19-14-53.  Looks like it was there in v5.0 as well.
>>>
>>> I'd suspect this in should_continue_reclaim():
>>>
>>>         /* Consider stopping depending on scan and reclaim activity */
>>>         if (sc->gfp_mask & __GFP_RETRY_MAYFAIL) {
>>>                 /*
>>>                  * For __GFP_RETRY_MAYFAIL allocations, stop reclaiming if the
>>>                  * full LRU list has been scanned and we are still failing
>>>                  * to reclaim pages. This full LRU scan is potentially
>>>                  * expensive but a __GFP_RETRY_MAYFAIL caller really wants to succeed
>>>                  */
>>>                 if (!nr_reclaimed && !nr_scanned)
>>>                         return false;
>>>
>>> And that for some reason, nr_scanned never becomes zero. But it's hard
>>> to figure out through all the layers of functions :/
>>
>> I got back to looking into the direct reclaim/compaction stalls when
>> trying to allocate huge pages.  As previously mentioned, the code is
>> looping for a long time in shrink_node().  The routine
>> should_continue_reclaim() returns true perhaps more often than it should.
>>
>> As Vlastimil guessed, my debug output below shows that nr_scanned remains
>> non-zero for quite a while.  This was on v5.2-rc6.
>>
> 
> I think it would be reasonable to have should_continue_reclaim allow an
> exit if we are scanning at higher priority than DEF_PRIORITY - 2, nr_scanned
> is less than SWAP_CLUSTER_MAX, and no pages are being reclaimed.

Thanks Mel,

I added such a check to should_continue_reclaim() (a sketch of what I
tried is below).  However, it does not address the issue I am seeing.
In that do-while loop in shrink_node(), the scan priority is never
raised: there is no priority-- step inside the loop, as that happens
further up the call chain in do_try_to_free_pages().  We can enter the
loop with priority == DEF_PRIORITY and continue to loop for minutes,
as seen in my previous debug output.
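
For reference, the check I added was along these lines.  This is only
a rough sketch of your suggestion, not a polished patch; it sits next
to the existing __GFP_RETRY_MAYFAIL test in should_continue_reclaim():

	/*
	 * Sketch only: allow an exit once we are scanning at higher
	 * priority than DEF_PRIORITY - 2 (numerically lower), almost
	 * nothing was scanned this round, and nothing was reclaimed.
	 */
	if (sc->priority < DEF_PRIORITY - 2 &&
	    nr_scanned < SWAP_CLUSTER_MAX && !nr_reclaimed)
		return false;

The loop structure shows why this cannot help.  Roughly (simplified
from mm/vmscan.c as of v5.2, most details omitted):

	/* shrink_node(): sc->priority never changes inside this loop */
	do {
		...
	} while (should_continue_reclaim(pgdat, ..., sc));

	/* the priority step is one level up, in do_try_to_free_pages() */
	do {
		sc->nr_scanned = 0;
		shrink_zones(zonelist, sc);
		...
	} while (--sc->priority >= 0);

Since shrink_node() never decrements sc->priority, entering the loop
at DEF_PRIORITY means sc->priority < DEF_PRIORITY - 2 can never become
true while we are stuck there, so the new check never fires.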

-- 
Mike Kravetz
