Message-ID: <20141030083544.GX12538@two.firstfloor.org>
Date: Thu, 30 Oct 2014 09:35:44 +0100
From: Andi Kleen <andi@...stfloor.org>
To: Alex Thorlton <athorlton@....com>
Cc: Andi Kleen <andi@...stfloor.org>, linux-mm@...ck.org,
	Andrew Morton <akpm@...ux-foundation.org>,
	Bob Liu <lliubbo@...il.com>,
	David Rientjes <rientjes@...gle.com>,
	"Eric W. Biederman" <ebiederm@...ssion.com>,
	Hugh Dickins <hughd@...gle.com>,
	Ingo Molnar <mingo@...hat.com>,
	Kees Cook <keescook@...omium.org>,
	"Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
	Mel Gorman <mgorman@...e.de>, Oleg Nesterov <oleg@...hat.com>,
	Peter Zijlstra <peterz@...radead.org>,
	Rik van Riel <riel@...hat.com>,
	Thomas Gleixner <tglx@...utronix.de>,
	Vladimir Davydov <vdavydov@...allels.com>,
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH 0/4] Convert khugepaged to a task_work function
>
> I suppose from the single-threaded point of view, it could be. Maybe we

It's not only for the single-threaded case.  Consider the "has to wait a
long time for a lock" problem Rik pointed out.  With that, multiple
threads are always better.

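To make the head-of-line blocking point concrete, here is a minimal
user-space sketch (a toy model, not khugepaged code: pthread mutexes
stand in for mmap_sem, and the mm count and sleep times are made up):

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

/*
 * Toy model: mm 0's lock is held for a while by a "faulting task".
 * With one scanner thread per mm, only mm 0's scan is delayed; a
 * single scanner walking the mms in order would stall on mm 0 and
 * get to none of the others until that lock is released.
 */
#define NR_MMS 4

static pthread_mutex_t mm_lock[NR_MMS] = {
	PTHREAD_MUTEX_INITIALIZER, PTHREAD_MUTEX_INITIALIZER,
	PTHREAD_MUTEX_INITIALIZER, PTHREAD_MUTEX_INITIALIZER,
};

/* Pretend some other task holds mm 0's lock for two seconds. */
static void *lock_holder(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&mm_lock[0]);
	sleep(2);
	pthread_mutex_unlock(&mm_lock[0]);
	return NULL;
}

/* Scan a single mm: take its lock, "collapse", drop it. */
static void *scan_one(void *arg)
{
	long i = (long)arg;

	pthread_mutex_lock(&mm_lock[i]);
	printf("mm %ld scanned\n", i);
	pthread_mutex_unlock(&mm_lock[i]);
	return NULL;
}

int main(void)
{
	pthread_t holder, scanner[NR_MMS];
	long i;

	pthread_create(&holder, NULL, lock_holder, NULL);
	sleep(1);		/* make sure mm 0's lock is already held */

	/* mms 1-3 print right away; only mm 0 waits for the holder. */
	for (i = 0; i < NR_MMS; i++)
		pthread_create(&scanner[i], NULL, scan_one, (void *)i);

	pthread_join(holder, NULL);
	for (i = 0; i < NR_MMS; i++)
		pthread_join(scanner[i], NULL);
	return 0;
}
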
> could look at this a bit differently. What if we allow processes to
> choose their collapse mechanism on fork? That way, the system could
> default to using the standard khugepaged mechanism, but we could request
> that processes handle collapses themselves if we want. Overall, I don't
> think that would add too much overhead to what I've already proposed
> here, and it gives us more flexibility.

We already have too many VM tunables.  It would be better to switch
automatically somehow.

I guess you could use some kind of work-stealing scheduler, but those
are fairly complicated.  Maybe some simpler heuristics can be found.

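As a rough illustration of what such an automatic switch might look
like, here is a minimal sketch in plain C.  Everything in it is made
up for the example: the per-mm counters, the thresholds, and the
function names do not correspond to any real kernel interface.

#include <stdio.h>
#include <stdbool.h>

/*
 * Hypothetical per-process scan state: how much collapse work is
 * pending, and how long the shared daemon has been ignoring us.
 */
struct mm_scan_state {
	unsigned long pages_to_scan;	/* huge-page candidates left */
	unsigned long ms_since_daemon;	/* time since khugepaged visited */
};

/* Made-up thresholds for the sketch. */
#define BACKLOG_THRESHOLD	(512UL * 512)	/* ~512 potential THPs */
#define STARVATION_MS		1000UL

/*
 * Decide whether this process should collapse its own huge pages
 * (e.g. via a task_work-style hook) instead of waiting for the
 * shared daemon -- no tunable exposed, just a heuristic.
 */
static bool should_collapse_in_task(const struct mm_scan_state *st)
{
	/* Big backlog and the daemon has not been around lately. */
	if (st->pages_to_scan > BACKLOG_THRESHOLD &&
	    st->ms_since_daemon > STARVATION_MS)
		return true;

	/* Otherwise leave the work to the shared daemon. */
	return false;
}

int main(void)
{
	struct mm_scan_state busy  = { 1UL << 20, 5000 };
	struct mm_scan_state quiet = { 100, 50 };

	printf("busy:  collapse in task = %d\n", should_collapse_in_task(&busy));
	printf("quiet: collapse in task = %d\n", should_collapse_in_task(&quiet));
	return 0;
}
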
BTW, my thinking has usually been to run more khugepaged threads, so
that large address spaces get scanned faster.

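A rough user-space model of that idea is below; the mm list is faked
as an array, the static striping is just one possible way to divide
the work, and nothing here corresponds to actual khugepaged internals.

#include <pthread.h>
#include <stdio.h>

#define NR_SCANNERS	4	/* hypothetical number of khugepaged threads */
#define NR_MMS		16	/* fake "address spaces" to scan */

/* Toy stand-in for an mm: just a count of pages to walk. */
static unsigned long mm_pages[NR_MMS];

struct scanner {
	int id;
	unsigned long scanned;	/* pages this scanner covered */
};

/*
 * Each scanner takes every NR_SCANNERS-th mm (static striping).  A
 * real implementation might instead hand out mms from a shared queue,
 * so that a scanner stuck on one address space does not leave the
 * others idle.
 */
static void *scan_thread(void *arg)
{
	struct scanner *s = arg;
	int i;

	for (i = s->id; i < NR_MMS; i += NR_SCANNERS)
		s->scanned += mm_pages[i];	/* "scan" the address space */

	return NULL;
}

int main(void)
{
	pthread_t tid[NR_SCANNERS];
	struct scanner scanners[NR_SCANNERS];
	int i;

	for (i = 0; i < NR_MMS; i++)
		mm_pages[i] = (i + 1) * 1000UL;

	for (i = 0; i < NR_SCANNERS; i++) {
		scanners[i].id = i;
		scanners[i].scanned = 0;
		pthread_create(&tid[i], NULL, scan_thread, &scanners[i]);
	}

	for (i = 0; i < NR_SCANNERS; i++) {
		pthread_join(tid[i], NULL);
		printf("scanner %d covered %lu pages\n", i, scanners[i].scanned);
	}
	return 0;
}
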
-Andi
--
ak@...ux.intel.com -- Speaking for myself only.