Message-ID: <alpine.DEB.2.02.1407151609120.32274@chino.kir.corp.google.com>
Date:	Tue, 15 Jul 2014 16:17:31 -0700 (PDT)
From:	David Rientjes <rientjes@...gle.com>
To:	Dave Hansen <dave.hansen@...el.com>
cc:	Andrew Morton <akpm@...ux-foundation.org>,
	Andrea Arcangeli <aarcange@...hat.com>,
	Mel Gorman <mgorman@...e.de>, Rik van Riel <riel@...hat.com>,
	"Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
	Bob Liu <bob.liu@...cle.com>, linux-mm@...ck.org,
	linux-kernel@...r.kernel.org
Subject: Re: [patch] mm, thp: only collapse hugepages to nodes with affinity

On Mon, 14 Jul 2014, Dave Hansen wrote:

> > +		if (node == NUMA_NO_NODE) {
> > +			node = page_to_nid(page);
> > +		} else {
> > +			int distance = node_distance(page_to_nid(page), node);
> > +
> > +			/*
> > +			 * Do not migrate to memory that would not be reclaimed
> > +			 * from.
> > +			 */
> > +			if (distance > RECLAIM_DISTANCE)
> > +				goto out_unmap;
> > +		}
> 
> Isn't the reclaim behavior based on zone_reclaim_mode and not
> RECLAIM_DISTANCE directly?  And isn't that reclaim behavior disabled by
> default?
> 

It seems that RECLAIM_DISTANCE has taken on a life of its own as a heuristic 
independent of zone_reclaim_mode, for example in its use when creating sched 
domains, which is unrelated to reclaim.
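
For reference, the generic default lives in include/linux/topology.h 
(architectures can override it), and the uses in question are plain 
node_distance() comparisons against that constant.  The helper below is only 
an illustration of the pattern, not a quote of the sched domain code:

	/* include/linux/topology.h: generic default, overridable per arch */
	#ifndef RECLAIM_DISTANCE
	#define RECLAIM_DISTANCE 30
	#endif

	/*
	 * Illustrative only: a pure distance heuristic that treats a node
	 * as "remote" once node_distance() exceeds the cutoff, with no
	 * reference to zone_reclaim_mode at all.
	 */
	static inline bool node_is_remote(int from, int to)
	{
		return node_distance(from, to) > RECLAIM_DISTANCE;
	}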

> I think you should at least be consulting zone_reclaim_mode.
> 

Good point, and it matches what the comment says about whether we'd actually 
reclaim from the remote node to allocate thp at fault.  I'll add it.
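
Concretely, I'm thinking of something like this on top of the hunk above 
(just a sketch of the intent, the final form may differ):

		if (node == NUMA_NO_NODE) {
			node = page_to_nid(page);
		} else if (zone_reclaim_mode) {
			int distance = node_distance(page_to_nid(page), node);

			/*
			 * Do not migrate to memory that would not be
			 * reclaimed from: the distance cutoff only matters
			 * when zone reclaim is actually enabled.
			 */
			if (distance > RECLAIM_DISTANCE)
				goto out_unmap;
		}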

After this change, we'll also need to consider the behavior of thp at fault: 
whether, when local memory is low or fragmented, remote HPAGE_PMD_SIZE memory 
is better than local PAGE_SIZE memory.  My page fault latency testing on true 
NUMA machines convincingly shows that it is not.

This makes me believe that, somewhat similarly to this patch, when we 
allocate thp memory at fault and zone_reclaim_mode is non-zero, we should 
allow only nodes within RECLAIM_DISTANCE of numa_node_id() and otherwise 
fall back to the PAGE_SIZE fault path.
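
Roughly, the node restriction would amount to something like this (a sketch 
only, using a hypothetical helper rather than the actual fault path):

	/*
	 * Hypothetical helper: with zone_reclaim_mode non-zero, build the
	 * set of nodes close enough to the faulting node that zone reclaim
	 * would reclaim from them.  The hugepage allocation would be
	 * restricted to this mask, and anything that can't be satisfied
	 * from it would take the PAGE_SIZE fault path instead.
	 */
	static void thp_fault_nodemask(nodemask_t *allowed)
	{
		int nid, this_nid = numa_node_id();

		nodes_clear(*allowed);
		for_each_online_node(nid)
			if (node_distance(this_nid, nid) <= RECLAIM_DISTANCE)
				node_set(nid, *allowed);
	}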

I've been hesitant to make that exact change, though, because 
zone_reclaim_mode is a system-wide setting and I really hope to avoid adding 
a prctl() that controls zone reclaim for a particular process.  Perhaps the 
NUMA balancing work makes this more dependable.
