Message-ID: <1328899148.25989.38.camel@laptop>
Date: Fri, 10 Feb 2012 19:39:08 +0100
From: Peter Zijlstra <a.p.zijlstra@...llo.nl>
To: Gilad Ben-Yossef <gilad@...yossef.com>
Cc: Chris Metcalf <cmetcalf@...era.com>,
Frederic Weisbecker <fweisbec@...il.com>,
linux-kernel@...r.kernel.org, Christoph Lameter <cl@...ux.com>,
linux-mm@...ck.org, Pekka Enberg <penberg@...nel.org>,
Matt Mackall <mpm@...enic.com>,
Sasha Levin <levinsasha928@...il.com>,
Rik van Riel <riel@...hat.com>,
Andi Kleen <andi@...stfloor.org>, Mel Gorman <mel@....ul.ie>,
Andrew Morton <akpm@...ux-foundation.org>,
Alexander Viro <viro@...iv.linux.org.uk>,
Avi Kivity <avi@...hat.com>,
Michal Nazarewicz <mina86@...a86.com>,
Kosaki Motohiro <kosaki.motohiro@...il.com>,
Milton Miller <miltonm@....com>
Subject: Re: [v7 0/8] Reduce cross CPU IPI interference
On Sun, 2012-02-05 at 13:46 +0200, Gilad Ben-Yossef wrote:
> > /*
> > * Cause all memory mappings to be populated in the page table.
> > * Specifying this when entering dataplane mode ensures that no future
> > * page fault events will occur to cause interrupts into the Linux
> > * kernel, as long as no new mappings are installed by mmap(), etc.
> > * Note that since the hardware TLB is of finite size, there will
> > * still be the potential for TLB misses that the hypervisor handles,
> > * either via its software TLB cache (fast path) or by walking the
> > * kernel page tables (slow path), so touching large amounts of memory
> > * will still incur hypervisor interrupt overhead.
> > */
> > #define DP_POPULATE 0x8
>
> hmm... I've probably missed something, but doesn't this replicate
> mlockall (MCL_CURRENT|MCL_FUTURE) ?
Never use mlockall(); it's a sign you're doing it wrong. Also, his comment
seems to imply MCL_FUTURE isn't required.
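[A sketch of the narrower alternative implied here: rather than locking the whole address space, populate and lock only the ranges the dataplane code actually touches, e.g. via mmap() with MAP_POPULATE | MAP_LOCKED. The buffer size is hypothetical; this is an editorial illustration, not from the original thread.]

#define _GNU_SOURCE
#include <stdio.h>
#include <sys/mman.h>

#define BUF_SIZE (16 * 1024 * 1024)	/* hypothetical working-set size */

int main(void)
{
	/* MAP_POPULATE pre-faults the pages and MAP_LOCKED pins them,
	 * so this one region cannot fault during the dataplane phase. */
	void *buf = mmap(NULL, BUF_SIZE, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS | MAP_POPULATE | MAP_LOCKED,
			 -1, 0);
	if (buf == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	/* ... use buf from the latency-sensitive section ... */
	munmap(buf, BUF_SIZE);
	return 0;
}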