Message-ID: <20131103094629.GA5330@gmail.com>
Date: Sun, 3 Nov 2013 10:46:29 +0100
From: Ingo Molnar <mingo@...nel.org>
To: Davidlohr Bueso <davidlohr@...com>
Cc: KOSAKI Motohiro <kosaki.motohiro@...il.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Hugh Dickins <hughd@...gle.com>,
Michel Lespinasse <walken@...gle.com>,
Mel Gorman <mgorman@...e.de>, Rik van Riel <riel@...hat.com>,
Guan Xuetao <gxt@...c.pku.edu.cn>, aswin@...com,
linux-kernel@...r.kernel.org, linux-mm@...ck.org,
Linus Torvalds <torvalds@...ux-foundation.org>
Subject: Re: [PATCH] mm: cache largest vma

* Davidlohr Bueso <davidlohr@...com> wrote:
> On Fri, 2013-11-01 at 16:38 -0400, KOSAKI Motohiro wrote:
> > (11/1/13 4:17 PM), Davidlohr Bueso wrote:
> >
> > > While caching the last used vma already does a nice job of avoiding
> > > having to iterate the rbtree in find_vma, we can improve. After
> > > studying the hit rate across a range of workloads and environments,
> > > it turned out to be around 45-50% - roughly constant for a standard
> > > desktop system (gnome3 + evolution + firefox + a few xterms), for
> > > several Java-related workloads (including Hadoop/terasort), and for
> > > aim7 - which indicates it's better than the 35% value documented
> > > in the code.
> > >
> > > By also caching the largest vma, that is, the one spanning the most
> > > addresses, we get a steady 10-15% hit-rate gain, putting the overall
> > > rate above 60%. This improvement comes at very low overhead on a
> > > miss. Furthermore, systems with !CONFIG_MMU keep the current logic.
> >
> > I'm slightly surprised this cache adds 15% to the hit rate. Which
> > applications get a benefit? You listed a lot of applications, but I'm
> > not sure which of them depends heavily on the largest vma.
>
> Well, I chose the largest vma because, since it spans the most
> addresses, it gives us the greatest chance of the faulted address
> already being covered by the cache when we do the lookup.
>
> The 15% improvement was with Hadoop. According to my notes it was at
> ~48% with the baseline kernel and increased to ~63% with this patch.
>
> In any case I didn't measure the rates at a per-task granularity, but
> at a general system level. When a system is first booted I can see
> that the mmap_cache access rate becomes the determining factor, and
> when adding a workload it doesn't change much. One exception to this
> was a kernel build, where the hit rate goes from ~50% to ~89% on a
> vanilla kernel.

~90% during a kernel build is pretty impressive.

Still the ad-hoc nature of the caching worries me a bit - but I don't have
any better ideas myself.

[I've Cc:-ed Linus, in case he has any better ideas.]

Thanks,

Ingo
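
For readers following along, the lookup ordering under discussion is
roughly: check a cached last-used VMA first (the existing mmap_cache
behavior), then a second cached slot holding the largest VMA, and fall
back to the full rbtree walk only when both miss. Below is a minimal,
self-contained user-space sketch of that two-slot lookup - not the
actual mm/mmap.c code. The struct layout, the field names (cache_last,
cache_largest) and the linear fallback search are illustrative
stand-ins, and the real find_vma() returns the first VMA whose end lies
above the address rather than checking containment only.

#include <stddef.h>
#include <stdio.h>

struct vma {
	unsigned long start, end;	/* [start, end) address range */
};

struct mm {
	struct vma *vmas;		/* stand-in for the kernel's rbtree */
	size_t nr_vmas;
	struct vma *cache_last;		/* last VMA found by a lookup */
	struct vma *cache_largest;	/* VMA spanning the most addresses */
};

static int vma_contains(const struct vma *v, unsigned long addr)
{
	return v && addr >= v->start && addr < v->end;
}

static struct vma *find_vma(struct mm *mm, unsigned long addr)
{
	size_t i;

	/* Fast path 1: the last-used VMA (the existing mmap_cache). */
	if (vma_contains(mm->cache_last, addr))
		return mm->cache_last;

	/*
	 * Fast path 2: the largest VMA covers the most addresses, so of
	 * all VMAs it is the single most likely one to contain a given
	 * faulting address.
	 */
	if (vma_contains(mm->cache_largest, addr)) {
		mm->cache_last = mm->cache_largest;
		return mm->cache_largest;
	}

	/* Slow path: full walk (the kernel walks mm->mm_rb instead). */
	for (i = 0; i < mm->nr_vmas; i++) {
		struct vma *v = &mm->vmas[i];

		if (vma_contains(v, addr)) {
			mm->cache_last = v;
			if (!mm->cache_largest ||
			    v->end - v->start > mm->cache_largest->end -
						mm->cache_largest->start)
				mm->cache_largest = v;
			return v;
		}
	}
	return NULL;
}

int main(void)
{
	struct vma vmas[] = { { 0x1000, 0x2000 }, { 0x4000, 0x40000 } };
	struct mm mm = { vmas, 2, NULL, NULL };

	find_vma(&mm, 0x5000);	/* miss: populates both cache slots */
	printf("cache hit: %d\n", find_vma(&mm, 0x6000) == &vmas[1]);
	return 0;
}

The sketch folds the largest-VMA bookkeeping into the lookup slow path
purely to stay self-contained; a real implementation would maintain
that slot wherever VMAs are created or resized, keeping the miss-path
overhead as low as the patch description claims.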