Message-ID: <1383340291.2653.33.camel@buesod1.americas.hpqcorp.net>
Date: Fri, 01 Nov 2013 14:11:31 -0700
From: Davidlohr Bueso <davidlohr@...com>
To: KOSAKI Motohiro <kosaki.motohiro@...il.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Hugh Dickins <hughd@...gle.com>,
Michel Lespinasse <walken@...gle.com>,
Ingo Molnar <mingo@...nel.org>, Mel Gorman <mgorman@...e.de>,
Rik van Riel <riel@...hat.com>,
Guan Xuetao <gxt@...c.pku.edu.cn>, aswin@...com,
linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [PATCH] mm: cache largest vma
On Fri, 2013-11-01 at 16:38 -0400, KOSAKI Motohiro wrote:
> (11/1/13 4:17 PM), Davidlohr Bueso wrote:
> > While caching the last used vma already does a nice job of avoiding
> > having to iterate the rbtree in find_vma, we can do better. After
> > studying the hit rate on a range of workloads and environments,
> > it turned out to be roughly 45-50% - constant for a standard
> > desktop system (gnome3 + evolution + firefox + a few xterms),
> > several Java-related workloads (including Hadoop/terasort),
> > and aim7 - which indicates it's better than the 35% value documented
> > in the code.
> >
> > By also caching the largest vma, that is, the one that spans the
> > most addresses, there is a steady 10-15% hit rate gain, putting
> > it above the 60% region. This improvement comes at very low
> > overhead on a miss. Furthermore, systems with !CONFIG_MMU keep
> > the current logic.
>
> I'm slightly surprised this cache gets a 15% hit rate gain. Which
> applications benefit? You listed a lot of applications, but I'm not
> sure which of them depend heavily on the largest vma.
Well, I chose the largest vma because it gives us a greater chance of
already being cached when we do the lookup for the faulted address.
The 15% improvement was with Hadoop. According to my notes it was at
~48% with the baseline kernel and increased to ~63% with this patch.
In any case I didn't measure the rates at per-task granularity, but at
the overall system level. When a system is first booted I can see that
the mmap_cache access rate becomes the determining factor, and adding a
workload doesn't change it much. One exception to this was a kernel
build, where the hit rate goes from ~50% to ~89% even on a vanilla kernel.
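
To make the idea concrete, here is a rough, simplified userspace sketch
of the two-level lookup (try the last used vma, then the largest vma,
then fall back to the full walk). All struct/field names and the
array-based "tree" below are illustrative only, not the actual kernel
data structures or this patch:

/*
 * Simplified model of the idea above: keep two cached vmas -- the last
 * one used and the largest one -- and try both before falling back to
 * the full lookup.
 */
#include <stddef.h>

struct vma {
	unsigned long vm_start;	/* first address of the area */
	unsigned long vm_end;	/* first address past the area */
};

struct mm {
	struct vma *cache_last;		/* last vma found by a lookup */
	struct vma *cache_largest;	/* vma spanning the most addresses */
	struct vma *vmas;		/* sorted array, stand-in for the rbtree */
	size_t nr_vmas;
	unsigned long hits, misses;	/* crude hit-rate instrumentation */
};

static int vma_contains(const struct vma *vma, unsigned long addr)
{
	return vma && addr >= vma->vm_start && addr < vma->vm_end;
}

static struct vma *find_vma(struct mm *mm, unsigned long addr)
{
	struct vma *vma;
	size_t i;

	/* Fast path 1: last used vma (the existing mmap_cache idea). */
	if (vma_contains(mm->cache_last, addr)) {
		mm->hits++;
		return mm->cache_last;
	}

	/* Fast path 2: largest vma, which covers the most addresses. */
	if (vma_contains(mm->cache_largest, addr)) {
		mm->hits++;
		return mm->cache_largest;
	}

	/* Slow path: first vma with vm_end > addr, like the rbtree walk. */
	mm->misses++;
	for (i = 0; i < mm->nr_vmas; i++) {
		vma = &mm->vmas[i];
		if (addr < vma->vm_end) {
			mm->cache_last = vma;
			/*
			 * Simplification: track the largest vma seen here; a
			 * real implementation would update it on insert/resize.
			 */
			if (!mm->cache_largest ||
			    vma->vm_end - vma->vm_start >
			    mm->cache_largest->vm_end - mm->cache_largest->vm_start)
				mm->cache_largest = vma;
			return vma;
		}
	}
	return NULL;
}

A miss only costs two extra compares before the tree walk, which is why
the overhead on a miss stays low; the hits/misses counters are just one
way the percentages above could be collected.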
Thanks,
Davidlohr