Message-ID: <alpine.DEB.2.00.1206291543360.17044@chino.kir.corp.google.com>
Date: Fri, 29 Jun 2012 15:50:31 -0700 (PDT)
From: David Rientjes <rientjes@...gle.com>
To: Andrew Morton <akpm@...ux-foundation.org>
cc: Petr Holasek <pholasek@...hat.com>,
Hugh Dickins <hughd@...gle.com>,
Andrea Arcangeli <aarcange@...hat.com>,
Chris Wright <chrisw@...s-sol.org>,
Izik Eidus <izik.eidus@...ellosystems.com>,
Rik van Riel <riel@...hat.com>, linux-kernel@...r.kernel.org,
linux-mm@...ck.org, Anton Arapov <anton@...hat.com>
Subject: Re: [PATCH v2] KSM: numa awareness sysfs knob
On Fri, 29 Jun 2012, Andrew Morton wrote:
> > I've tested this patch on numa machines with 2, 4 and 8 nodes and
> > measured speed of memory access inside of KVM guests with memory pinned
> > to one of nodes with this benchmark:
> >
> > http://pholasek.fedorapeople.org/alloc_pg.c
> >
> > Population standard deviations of access times in percentage of average
> > were following:
> >
> > merge_nodes=1
> > 2 nodes 1.4%
> > 4 nodes 1.6%
> > 8 nodes 1.7%
> >
> > merge_nodes=0
> > 2 nodes 1%
> > 4 nodes 0.32%
> > 8 nodes 0.018%
>
> ooh, numbers! Thanks.
>
Ok, the standard deviation increases when merging pages from nodes at a
remote distance; that makes sense. But if that's true, then you would
either restrict the entire application to local memory with mempolicies or
cpusets, or use mbind() to restrict this memory to that set of nodes
already, so that accesses, even with ksm merging, would have affinity.
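
For illustration only (not part of the patch): a minimal sketch of the
mbind() approach suggested above, binding an anonymous region to a single
node before it is faulted in. The region size, node number, and mapping
are placeholder assumptions, not anything from Petr's benchmark setup.

/* mbind-sketch.c -- illustrative only; link with -lnuma */
#include <numaif.h>	/* mbind(), MPOL_BIND */
#include <sys/mman.h>
#include <stdio.h>

int main(void)
{
	size_t len = 1UL << 30;	/* 1 GiB region, purely as an example */
	void *addr = mmap(NULL, len, PROT_READ | PROT_WRITE,
			  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (addr == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	unsigned long nodemask = 1UL << 0;	/* restrict to node 0 */
	if (mbind(addr, len, MPOL_BIND, &nodemask,
		  sizeof(nodemask) * 8, 0)) {
		perror("mbind");
		return 1;
	}

	/* pages faulted in from here on come from node 0 only */
	return 0;
}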