Date:	Thu, 6 Mar 2014 15:12:06 -0800
From:	Andrew Morton <>
To:	Johannes Weiner <>
Subject: Re: [merged] mm-page_alloc-reset-aging-cycle-with-gfp_thisnode-v2.patch removed from -mm

On Thu, 6 Mar 2014 18:04:04 -0500 Johannes Weiner <> wrote:

> > what bug does it fix and what are the user-visible effects??
> Ok, maybe this is better?
> ---
> GFP_THISNODE is for callers that implement their own clever fallback
> to remote nodes.  It restricts the allocation to the specified node
> and does not invoke reclaim, assuming that the caller will take care
> of it when the fallback fails, e.g. through a subsequent allocation
> request without GFP_THISNODE set.
> However, many current GFP_THISNODE users only want the node-exclusive
> aspect of the flag, without actually implementing their own fallback
> or triggering reclaim if necessary.  This results in things like page
> migration failing prematurely even when there is easily reclaimable
> memory available, unless kswapd happens to be running already or a
> concurrent allocation attempt triggers the necessary reclaim.
> Convert all callsites that don't implement their own fallback strategy
> to __GFP_THISNODE.  This restricts the allocation to a single node too,
> but at the same time allows the allocator to enter the slowpath, wake
> kswapd, and invoke direct reclaim if necessary, to make the allocation
> happen when memory is full.
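
To make the distinction concrete, the conversion at a call site looks
roughly like the sketch below.  This is illustrative only, not taken
from the patch: the real call sites combine other flags, and
alloc_pages_node() is just used here as a generic node-local allocation.

	#include <linux/gfp.h>

	/* Before: GFP_THISNODE restricts the allocation to nid, and the
	 * allocator skips reclaim entirely, so the request fails as soon
	 * as the node's free lists are exhausted.
	 */
	static struct page *alloc_local_page_old(int nid, unsigned int order)
	{
		return alloc_pages_node(nid, GFP_KERNEL | GFP_THISNODE, order);
	}

	/* After: __GFP_THISNODE still forbids fallback to other nodes, but
	 * the allocator may enter the slowpath, wake kswapd, and direct
	 * reclaim on that node before giving up.
	 */
	static struct page *alloc_local_page_new(int nid, unsigned int order)
	{
		return alloc_pages_node(nid, GFP_KERNEL | __GFP_THISNODE, order);
	}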

Looks good, thanks.  I'll send this Linuswards next week.