Message-ID: <20131219152906.GQ11295@suse.de>
Date: Thu, 19 Dec 2013 15:29:06 +0000
From: Mel Gorman <mgorman@...e.de>
To: Alex Thorlton <athorlton@....com>
Cc: Andrew Morton <akpm@...ux-foundation.org>, linux-mm@...ck.org,
"Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
Benjamin Herrenschmidt <benh@...nel.crashing.org>,
Rik van Riel <riel@...hat.com>,
Wanpeng Li <liwanp@...ux.vnet.ibm.com>,
Michel Lespinasse <walken@...gle.com>,
Benjamin LaHaise <bcrl@...ck.org>,
Oleg Nesterov <oleg@...hat.com>,
"Eric W. Biederman" <ebiederm@...ssion.com>,
Andy Lutomirski <luto@...capital.net>,
Al Viro <viro@...iv.linux.org.uk>,
David Rientjes <rientjes@...gle.com>,
Zhang Yanfei <zhangyanfei@...fujitsu.com>,
Peter Zijlstra <peterz@...radead.org>,
Johannes Weiner <hannes@...xchg.org>,
Michal Hocko <mhocko@...e.cz>,
Jiang Liu <jiang.liu@...wei.com>,
Cody P Schafer <cody@...ux.vnet.ibm.com>,
Glauber Costa <glommer@...allels.com>,
Kamezawa Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
Naoya Horiguchi <n-horiguchi@...jp.nec.com>,
linux-kernel@...r.kernel.org,
Andrea Arcangeli <aarcange@...hat.com>
Subject: Re: [RFC PATCH 0/3] Change how we determine when to hand out THPs
On Mon, Dec 16, 2013 at 11:12:15AM -0600, Alex Thorlton wrote:
> > Please cc Andrea on this.
>
> I'm going to clean up a few small things for a v2 pretty soon; I'll be
> sure to cc Andrea there.
>
> > > My proposed solution to the problem is to allow users to set a
> > > threshold at which THPs will be handed out. The idea here is that, when
> > > a user faults in a page in an area where they would usually be handed a
> > > THP, we pull 512 pages off the free list, as we would with a regular
> > > THP, but we only fault in single pages from that chunk, until the user
> > > has faulted in enough pages to pass the threshold we've set. Once they
> > > pass the threshold, we do the necessary work to turn our 512 page chunk
> > > into a proper THP. As it stands now, if the user tries to fault in
> > > pages from different nodes, we completely give up on ever turning a
> > > particular chunk into a THP, and just fault in the 4K pages as they're
> > > requested. We may want to make this tunable in the future (e.g. allow
> > > them to fault in from only 2 different nodes).
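
A toy C sketch of the per-fault bookkeeping described above; struct
thp_reservation, reservation_fault() and the threshold value are invented
for illustration and are not taken from the patch:

#include <stdbool.h>
#include <stdio.h>

#define HPAGE_NR_PAGES 512

struct thp_reservation {
	int node;	/* node the 512-page chunk was pulled from */
	int faulted;	/* subpages faulted in so far */
	int threshold;	/* subpages required before promotion */
	bool abandoned;	/* set once faults arrive from a second node */
};

/* Called for each 4K fault inside a reserved chunk; true means "promote". */
static bool reservation_fault(struct thp_reservation *res, int fault_node)
{
	if (res->abandoned)
		return false;

	if (fault_node != res->node) {
		/* Faults span more than one node: give up on the THP. */
		res->abandoned = true;
		return false;
	}

	return ++res->faulted >= res->threshold;
}

int main(void)
{
	struct thp_reservation res = { .node = 0, .faulted = 0,
				       .threshold = 64, .abandoned = false };
	int i;

	for (i = 0; i < HPAGE_NR_PAGES; i++) {
		if (reservation_fault(&res, 0)) {
			printf("promote to THP after %d faults\n", res.faulted);
			break;
		}
	}
	return 0;
}
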
> >
> > OK. But all 512 pages reside on the same node, yes? Whereas with thp
> > disabled those 512 pages would have resided closer to the CPUs which
> > instantiated them.
>
> As it stands right now, yes: since we're pulling a 512 page contiguous
> chunk off the free list, everything from that chunk will reside on the
> same node. But, as I (stupidly) forgot to mention in my original e-mail,
> one piece I have yet to add is the functionality to put the remaining
> unfaulted pages from our chunk *back* on the free list after we give up
> on handing out a THP.
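
A toy sketch of that missing piece; the faulted[] bitmap and
free_base_page() are hypothetical stand-ins rather than allocator API:

#include <stdbool.h>
#include <stdio.h>

#define HPAGE_NR_PAGES 512

/* Hypothetical stand-in for handing one base page back to the allocator. */
static int freed;
static void free_base_page(unsigned long pfn)
{
	(void)pfn;
	freed++;
}

/* On giving up on the THP, return every subpage that was never faulted. */
static void release_unfaulted(unsigned long aligned_pfn,
			      const bool faulted[HPAGE_NR_PAGES])
{
	int i;

	for (i = 0; i < HPAGE_NR_PAGES; i++)
		if (!faulted[i])
			free_base_page(aligned_pfn + i);
}

int main(void)
{
	bool faulted[HPAGE_NR_PAGES] = { [0] = true, [7] = true };

	release_unfaulted(0x100000, faulted);
	printf("returned %d of %d subpages\n", freed, HPAGE_NR_PAGES);
	return 0;
}
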
You don't necessarily have to take it off the free list in the first
place either. A heavy-handed approach is to create
MIGRATE_MOVABLE_THP_RESERVATION_BECAUSE_WHO_NEEDS_SNAPPY_NAMES and put it
at the bottom of the fallback lists in the page allocator. Allocate one
base page and move the other 511 to that list. On the second fault, use the
correctly aligned page if it's still on the buddy lists and local to the
current NUMA node; otherwise fall back to a normal allocation. On promotion,
you're checking first whether all the faulted pages are on the same node and
second whether the correctly aligned pages are still on the free lists.
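
Very roughly, and with every helper below (pfn_is_free(), pfn_to_node(),
alloc_base_page()) being a made-up placeholder rather than a real buddy
allocator interface, the fault and promotion checks would look something
like:

#include <stdbool.h>

#define HPAGE_NR_PAGES 512

/* Placeholder queries -- assumptions, not real allocator interfaces. */
static bool pfn_is_free(unsigned long pfn) { (void)pfn; return true; }
static int pfn_to_node(unsigned long pfn) { (void)pfn; return 0; }
static unsigned long alloc_base_page(int node) { (void)node; return 0; }

/*
 * Second and later faults: reuse the correctly aligned page if it is
 * still free and local to the faulting node, otherwise fall back to a
 * normal allocation and let the reservation lapse.
 */
static unsigned long fault_subpage(unsigned long aligned_pfn,
				   unsigned long offset, int fault_node)
{
	unsigned long pfn = aligned_pfn + offset;

	if (pfn_is_free(pfn) && pfn_to_node(pfn) == fault_node)
		return pfn;
	return alloc_base_page(fault_node);
}

/*
 * Promotion: every faulted subpage must be on the home node and every
 * untouched aligned page must still be sitting on the free lists.
 */
static bool can_promote(unsigned long aligned_pfn, int home_node,
			const bool faulted[HPAGE_NR_PAGES])
{
	unsigned long off;

	for (off = 0; off < HPAGE_NR_PAGES; off++) {
		if (faulted[off]) {
			if (pfn_to_node(aligned_pfn + off) != home_node)
				return false;
		} else if (!pfn_is_free(aligned_pfn + off)) {
			return false;
		}
	}
	return true;
}

int main(void)
{
	bool faulted[HPAGE_NR_PAGES] = { [0] = true };

	faulted[1] = fault_subpage(0x200000, 1, 0) == 0x200001;
	return can_promote(0x200000, 0, faulted) ? 0 : 1;
}
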
The addition of a migrate type would be very heavy-handed, though. You
could instead create a special-cased linked list of potentially reserved
pages that is drained before the page allocator wakes kswapd. Order the
pages such that the oldest one on the new free list is the first allocated
back. That way you do not have to worry about scanning tasks for pages to
put back on the free list.
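
A rough illustration of that lighter-weight variant as a plain FIFO
drained oldest-first; none of the names below are existing allocator code:

#include <stdlib.h>

struct reserved_page {
	unsigned long pfn;
	struct reserved_page *next;
};

/* Oldest entry at the head, newest at the tail. */
static struct reserved_page *resv_head, *resv_tail;

static void resv_add(unsigned long pfn)
{
	struct reserved_page *p = malloc(sizeof(*p));

	if (!p)
		return;
	p->pfn = pfn;
	p->next = NULL;
	if (resv_tail)
		resv_tail->next = p;
	else
		resv_head = p;
	resv_tail = p;
}

/* Placeholder for returning a page to the buddy allocator. */
static void give_back(unsigned long pfn) { (void)pfn; }

/*
 * Called under allocation pressure, before kswapd would be woken: drain
 * the oldest reservations first, so no task scanning is ever needed to
 * find pages to put back on the free lists.
 */
static void drain_reservations(int nr)
{
	while (nr-- > 0 && resv_head) {
		struct reserved_page *p = resv_head;

		resv_head = p->next;
		if (!resv_head)
			resv_tail = NULL;
		give_back(p->pfn);
		free(p);
	}
}

int main(void)
{
	resv_add(0x100001);
	resv_add(0x100002);
	drain_reservations(1);	/* 0x100001, the oldest, goes back first */
	drain_reservations(8);	/* drains the rest */
	return 0;
}
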
--
Mel Gorman
SUSE Labs