Message-ID: <20141117213415.GU21147@sgi.com>
Date: Mon, 17 Nov 2014 15:34:15 -0600
From: Alex Thorlton <athorlton@....com>
To: Vlastimil Babka <vbabka@...e.cz>
Cc: Rik van Riel <riel@...hat.com>, Andi Kleen <andi@...stfloor.org>,
Alex Thorlton <athorlton@....com>, linux-mm@...ck.org,
Andrew Morton <akpm@...ux-foundation.org>,
Bob Liu <lliubbo@...il.com>,
David Rientjes <rientjes@...gle.com>,
"Eric W. Biederman" <ebiederm@...ssion.com>,
Hugh Dickins <hughd@...gle.com>,
Ingo Molnar <mingo@...hat.com>,
Kees Cook <keescook@...omium.org>,
"Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
Mel Gorman <mgorman@...e.de>, Oleg Nesterov <oleg@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Thomas Gleixner <tglx@...utronix.de>,
Vladimir Davydov <vdavydov@...allels.com>,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH 0/4] Convert khugepaged to a task_work function
On Fri, Oct 31, 2014 at 09:27:16PM +0100, Vlastimil Babka wrote:
> What could help would be to cache one or a few free huge pages per zone,
> with cache re-fill done asynchronously, e.g. via work queues. The cache
> could benefit fault-THP allocations as well. And adding some logic that
> if nobody uses the cached pages and memory is low, then free them. And
> importantly, if it's not possible to allocate huge pages for the cache,
> then prevent scanning for collapse candidates, as there's no point.
> (Well, this is probably more complex if some nodes can allocate huge
> pages and others can't.)
I think this would be a pretty cool addition, even separately from this
effort. If we keep a huge page cached on each NUMA node, then we could,
theoretically, really speed up both the khugepaged scans (even if we don't
move those scans to task_work) and regular THP faults. I'll add it to
my ever-growing wish list :)
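
Very roughly, I'm picturing something like the sketch below. To be clear,
this is only to illustrate the idea, not a real patch - the names
(hpage_cache, hpage_cache_get(), hpage_cache_refill(), refill_failed) are
all made up, and locking against node hotplug, shrinker integration, and
the "stop scanning when refill fails" policy are completely hand-waved:

#include <linux/gfp.h>
#include <linux/huge_mm.h>
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/mm.h>
#include <linux/nodemask.h>
#include <linux/spinlock.h>
#include <linux/workqueue.h>

/* One cached huge page per node, refilled asynchronously. */
struct hpage_cache {
	spinlock_t lock;
	struct page *page;		/* cached huge page, or NULL */
	struct work_struct refill_work;
	int nid;
	bool refill_failed;		/* hint: skip collapse scans here */
};

static struct hpage_cache hpage_cache[MAX_NUMNODES];

static void hpage_cache_refill(struct work_struct *work)
{
	struct hpage_cache *hc = container_of(work, struct hpage_cache,
					      refill_work);
	struct page *page;

	/* Allocate in the background so faults/collapses don't stall here. */
	page = alloc_pages_node(hc->nid, GFP_TRANSHUGE, HPAGE_PMD_ORDER);

	spin_lock(&hc->lock);
	if (page && !hc->page) {
		hc->page = page;
		page = NULL;
	}
	hc->refill_failed = !hc->page;
	spin_unlock(&hc->lock);

	/* Lost the race (or cache already full) - drop the extra page. */
	if (page)
		__free_pages(page, HPAGE_PMD_ORDER);
}

/* Take the cached huge page for @nid (if any) and kick off a refill. */
static struct page *hpage_cache_get(int nid)
{
	struct hpage_cache *hc = &hpage_cache[nid];
	struct page *page;

	spin_lock(&hc->lock);
	page = hc->page;
	hc->page = NULL;
	spin_unlock(&hc->lock);

	schedule_work(&hc->refill_work);
	return page;	/* may be NULL; caller falls back to a normal alloc */
}

/* Would be wired up from khugepaged/THP init in a real patch. */
static int __init hpage_cache_init(void)
{
	int nid;

	for_each_node(nid) {
		struct hpage_cache *hc = &hpage_cache[nid];

		spin_lock_init(&hc->lock);
		INIT_WORK(&hc->refill_work, hpage_cache_refill);
		hc->nid = nid;
	}
	return 0;
}

The refill_failed flag is where the "don't bother scanning for collapse
candidates when we can't even refill the cache" check could hook in,
though as you say, the per-node case makes that policy trickier.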
- Alex