Message-ID: <20120319134029.GK24602@redhat.com>
Date: Mon, 19 Mar 2012 14:40:29 +0100
From: Andrea Arcangeli <aarcange@...hat.com>
To: Avi Kivity <avi@...hat.com>
Cc: Peter Zijlstra <a.p.zijlstra@...llo.nl>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...e.hu>, Paul Turner <pjt@...gle.com>,
Suresh Siddha <suresh.b.siddha@...el.com>,
Mike Galbraith <efault@....de>,
"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
Lai Jiangshan <laijs@...fujitsu.com>,
Dan Smith <danms@...ibm.com>,
Bharata B Rao <bharata.rao@...il.com>,
Lee Schermerhorn <Lee.Schermerhorn@...com>,
Rik van Riel <riel@...hat.com>,
Johannes Weiner <hannes@...xchg.org>,
linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [RFC][PATCH 00/26] sched/numa

On Mon, Mar 19, 2012 at 01:42:08PM +0200, Avi Kivity wrote:
> Extra work, and more slowness until they get rebuilt. Why not migrate
> entire large pages?

The main problem is the double copy: a first copy to migrate the split
pages and a second one when khugepaged re-collapses them into a huge
page. That is why we want native huge page migration over time; it
also stops access to the pages for a shorter period of time.
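
To make the cost concrete, here is a small userspace sketch of the data
movement only (not kernel code; split_path(), native_path() and the
2MB/4k sizes are just illustrative assumptions): the split-then-collapse
path moves every byte twice, native huge page migration moves it once.

/*
 * Userspace model of the data movement, not kernel code.
 */
#include <stdlib.h>
#include <string.h>
#include <stdio.h>

#define HPAGE_SIZE (2UL << 20)   /* 2MB huge page */
#define PAGE_SIZE  (4UL << 10)   /* 4k base page */

static void split_path(char *src, char *dst, char *collapsed)
{
	size_t off;

	/* first copy: migrate each base page to the new node */
	for (off = 0; off < HPAGE_SIZE; off += PAGE_SIZE)
		memcpy(dst + off, src + off, PAGE_SIZE);

	/* second copy: khugepaged re-collapses into a fresh huge page */
	memcpy(collapsed, dst, HPAGE_SIZE);
}

static void native_path(char *src, char *dst)
{
	/* single copy: the huge page is migrated as one unit */
	memcpy(dst, src, HPAGE_SIZE);
}

int main(void)
{
	char *src = malloc(HPAGE_SIZE);
	char *dst = malloc(HPAGE_SIZE);
	char *collapsed = malloc(HPAGE_SIZE);

	if (!src || !dst || !collapsed)
		return 1;
	memset(src, 0xaa, HPAGE_SIZE);

	split_path(src, dst, collapsed);   /* 4MB moved for 2MB of data */
	native_path(src, dst);             /* 2MB moved for 2MB of data */

	printf("split+collapse moves %lu bytes, native moves %lu bytes\n",
	       2 * HPAGE_SIZE, HPAGE_SIZE);
	free(src);
	free(dst);
	free(collapsed);
	return 0;
}
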
> I agree with this, but it's really widespread throughout the kernel,
> from interrupts to work items to background threads. It needs to be
> solved generically (IIRC vhost has some accounting fix for a similar issue).

Exactly.

> It's the standard space/time tradeoff. One solution wants more
> storage, the other wants more faults.

I didn't grow it much more than memcg does, and at least if you boot
on NUMA hardware you are sure to use AutoNUMA. That the data sits in
struct page is an implementation detail; later it will only be
allocated if the kernel is booted on NUMA hardware.
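
As a rough illustration of that allocation policy (a userspace model
only; numa_hardware_present(), struct page_autonuma and the array
layout are assumptions, not the actual AutoNUMA code), the per-page
bookkeeping can live in a side array that is only allocated when the
machine turns out to be NUMA, much like memcg's page_cgroup array:

#include <stdbool.h>
#include <stdlib.h>
#include <stdio.h>

struct page_autonuma {
	int last_nid;		/* node that last touched the page */
	unsigned long flags;
};

static struct page_autonuma *autonuma_map;	/* NULL on non-NUMA boots */
static unsigned long nr_pages = 1024;		/* pretend memory size */

static bool numa_hardware_present(void)
{
	return true;	/* stand-in for a firmware/topology check */
}

static int autonuma_init(void)
{
	if (!numa_hardware_present())
		return 0;	/* non-NUMA: no per-page overhead at all */

	autonuma_map = calloc(nr_pages, sizeof(*autonuma_map));
	return autonuma_map ? 0 : -1;
}

static struct page_autonuma *page_autonuma(unsigned long pfn)
{
	return autonuma_map ? &autonuma_map[pfn] : NULL;
}

int main(void)
{
	if (autonuma_init())
		return 1;
	if (page_autonuma(42))
		printf("NUMA boot: tracking %lu pages\n", nr_pages);
	else
		printf("non-NUMA boot: no per-page tracking allocated\n");
	free(autonuma_map);
	return 0;
}

On a non-NUMA boot the array stays NULL and the only cost is the
pointer check, which is the point of making the struct page placement
an implementation detail.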