Date:   Thu, 4 Jul 2019 13:09:03 +0200
From:   Michal Hocko
To:     Mike Kravetz
Cc:     Mel Gorman, Vlastimil Babka, linux-kernel,
        Andrea Arcangeli, Johannes Weiner
Subject: Re: [Question] Should direct reclaim time be bounded?

On Wed 03-07-19 16:54:35, Mike Kravetz wrote:
> On 7/3/19 2:43 AM, Mel Gorman wrote:
> > Indeed. I'm getting knocked offline shortly so I didn't give this the
> > time it deserves but it appears that part of this problem is
> > hugetlb-specific when one node is full and can enter into this continual
> > loop due to __GFP_RETRY_MAYFAIL requiring both nr_reclaimed and
> > nr_scanned to be zero.
> Yes, I am not aware of any other large order allocations consistently made
> with __GFP_RETRY_MAYFAIL.  But, I did not look too closely.  Michal believes
> that hugetlb pages allocations should use __GFP_RETRY_MAYFAIL.

Yes. The argument is that this is controllable by an admin and failures
should be prevented as much as possible. I didn't get to understand the
should_continue_reclaim part of the problem, but I have a strong feeling
that the __GFP_RETRY_MAYFAIL handling at that layer is not correct. What
happens if it is simply removed and we rely only on the retry mechanism
in the page allocator instead? Is the success rate reduced?
Michal Hocko
