Date:	Tue, 05 May 2015 11:12:52 +0200
From:	Vlastimil Babka <vbabka@...e.cz>
To:	"Aneesh Kumar K.V" <aneesh.kumar@...ux.vnet.ibm.com>,
	David Rientjes <rientjes@...gle.com>
CC:	Andrew Morton <akpm@...ux-foundation.org>,
	Greg Thelen <gthelen@...gle.com>,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [patch v2 for-4.0] mm, thp: really limit transparent hugepage
 allocation to local node

On 04/21/2015 09:31 AM, Aneesh Kumar K.V wrote:
> Vlastimil Babka <vbabka@...e.cz> writes:
>
>> On 25.2.2015 22:24, David Rientjes wrote:
>>>
>>>> alloc_pages_preferred_node() variant, change the exact_node() variant to pass
>>>> __GFP_THISNODE, and audit and adjust all callers accordingly.
>>>>
>>> Sounds like that should be done as part of a cleanup after the 4.0 issues
>>> are addressed.  alloc_pages_exact_node() does seem to suggest that we want
>>> exactly that node, implying __GFP_THISNODE behavior already, so it would
>>> be good to avoid having this come up again in the future.
>>
>> Oh lovely, just found out that there's alloc_pages_node(), which should
>> be the preferred-node-only version, but in fact does not differ from
>> alloc_pages_exact_node() in any relevant way. I agree we should do some
>> larger cleanup for the next version.
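
For reference, the two helpers in include/linux/gfp.h looked roughly like
this at the time; neither one sets __GFP_THISNODE, so the only real
difference is the nid sanity check:

	static inline struct page *alloc_pages_node(int nid, gfp_t gfp_mask,
						unsigned int order)
	{
		/* Unknown node is current node */
		if (nid < 0)
			nid = numa_node_id();

		return __alloc_pages(gfp_mask, order, node_zonelist(nid, gfp_mask));
	}

	static inline struct page *alloc_pages_exact_node(int nid, gfp_t gfp_mask,
						unsigned int order)
	{
		VM_BUG_ON(nid < 0 || nid >= MAX_NUMNODES);

		return __alloc_pages(gfp_mask, order, node_zonelist(nid, gfp_mask));
	}
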
>>
>>>> Also, you pass __GFP_NOWARN but that should be covered by GFP_TRANSHUGE
>>>> already. Of course, nothing guarantees that hugepage == true implies that gfp
>>>> == GFP_TRANSHUGE... but current in-tree callers conform to that.
>>>>
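
At the time, GFP_TRANSHUGE was defined roughly as below; that is where the
__GFP_NOWARN and __GFP_NORETRY bits come from:

	#define GFP_TRANSHUGE	(GFP_HIGHUSER_MOVABLE | __GFP_COMP | \
				 __GFP_NOMEMALLOC | __GFP_NORETRY | \
				 __GFP_NOWARN | __GFP_NO_KSWAPD)
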
>>> Ah, good point, and it includes __GFP_NORETRY as well which means that
>>> this patch is busted.  It won't try compaction or direct reclaim in the
>>> page allocator slowpath because of this:
>>>
>>> 	/*
>>> 	 * GFP_THISNODE (meaning __GFP_THISNODE, __GFP_NORETRY and
>>> 	 * __GFP_NOWARN set) should not cause reclaim since the subsystem
>>> 	 * (f.e. slab) using GFP_THISNODE may choose to trigger reclaim
>>> 	 * using a larger set of nodes after it has established that the
>>> 	 * allowed per node queues are empty and that nodes are
>>> 	 * over allocated.
>>> 	 */
>>> 	if (IS_ENABLED(CONFIG_NUMA) &&
>>> 	    (gfp_mask & GFP_THISNODE) == GFP_THISNODE)
>>> 		goto nopage;
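
GFP_THISNODE here is the composite define, which at the time was roughly:

	#ifdef CONFIG_NUMA
	#define GFP_THISNODE	(__GFP_THISNODE | __GFP_NORETRY | __GFP_NOWARN)
	#else
	#define GFP_THISNODE	((__force gfp_t)0)
	#endif

so GFP_TRANSHUGE | __GFP_THISNODE has all three bits set, the check above
matches, and the allocation goes straight to nopage.
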
>>>
>>> Hmm.  It would be disappointing to have to pass the nodemask of the exact
>>> node that we want to allocate from into the page allocator to avoid using
>>> __GFP_THISNODE.
>>
>> Yeah.
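
A sketch of what that would mean, using the __alloc_pages_nodemask() entry
point that already exists (hypothetical and untested, just to show the
shape of it):

	/* Confine the allocation to nid via the nodemask instead of
	 * __GFP_THISNODE, so the GFP_THISNODE check in the slowpath
	 * can never match. */
	nodemask_t nmask = nodemask_of_node(nid);
	struct page *page = __alloc_pages_nodemask(gfp_mask, order,
				node_zonelist(nid, gfp_mask), &nmask);
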
>>
>>>
>>> There's a sneaky way around it: just remove __GFP_NORETRY from
>>> GFP_TRANSHUGE so the condition above fails; the page allocator won't
>>> retry such a high-order allocation anyway. But that probably just
>>> papers over this stuff too much already.  I think what we want to do is
>>
>> Alternatively, alloc_pages_exact_node() could add __GFP_THISNODE just to
>> the node_zonelist() call and not to the __alloc_pages() gfp_mask proper?
>> Unless __GFP_THISNODE was given *also* in the incoming gfp_mask, this
>> should give us the right combination? But it's also subtle....
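
Something along these lines, where only the zonelist selection sees
__GFP_THISNODE (an untested sketch of the idea above):

	static inline struct page *alloc_pages_exact_node(int nid, gfp_t gfp_mask,
						unsigned int order)
	{
		VM_BUG_ON(nid < 0 || nid >= MAX_NUMNODES);

		/* Pick the no-fallback zonelist for nid, but keep the
		 * gfp_mask handed to __alloc_pages() itself clean, so
		 * the GFP_THISNODE slowpath check cannot match. */
		return __alloc_pages(gfp_mask, order,
				node_zonelist(nid, gfp_mask | __GFP_THISNODE));
	}
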
>>
>>> cause the slab allocators to not use __GFP_WAIT if they want to avoid
>>> reclaim.
>>
>> Yes, the fewer subtle heuristics we have that include combinations of
>> flags (*cough* GFP_TRANSHUGE *cough*), the better.
>>
>>> This is probably going to be a much more invasive patch than originally
>>> thought.
>>
>> Right, we might be changing behavior not just for slab allocators, but
>> also for others using such a combination of flags.
>
> Any update on this? Did we reach a conclusion on how to go forward here?

I believe David's later version was merged already. Or what exactly are 
you asking about?

> -aneesh
>
