Message-ID: <20131104073640.GF13030@gmail.com>
Date: Mon, 4 Nov 2013 08:36:40 +0100
From: Ingo Molnar <mingo@...nel.org>
To: Davidlohr Bueso <davidlohr@...com>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Hugh Dickins <hughd@...gle.com>,
Michel Lespinasse <walken@...gle.com>,
Mel Gorman <mgorman@...e.de>, Rik van Riel <riel@...hat.com>,
Guan Xuetao <gxt@...c.pku.edu.cn>,
"Chandramouleeswaran, Aswin" <aswin@...com>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
linux-mm <linux-mm@...ck.org>
Subject: Re: [PATCH] mm: cache largest vma
* Davidlohr Bueso <davidlohr@...com> wrote:
> I will look into doing the vma cache per thread instead of mm (I hadn't
> really looked at the problem like this) as well as Ingo's suggestion on
> the weighted LRU approach. However, having seen that we can cheaply and
> easily reach around ~70% hit rate in a lot of workloads, makes me wonder
> how good is good enough?
So I think it all really depends on the hit/miss cost difference. It makes
little sense to add a more complex scheme if it washes out most of the
benefits!
Also note the historic context: the _original_ mmap_cache, that I
implemented 16 years ago, was a front-line cache to a linear list walk
over all vmas (!).
This is the relevant 2.1.37pre1 code in include/linux/mm.h:
	/* Look up the first VMA which satisfies addr < vm_end, NULL if none. */
	static inline struct vm_area_struct *find_vma(struct mm_struct *mm, unsigned long addr)
	{
		struct vm_area_struct *vma = NULL;

		if (mm) {
			/* Check the cache first. */
			vma = mm->mmap_cache;
			if (!vma || (vma->vm_end <= addr) || (vma->vm_start > addr)) {
				vma = mm->mmap;
				while (vma && vma->vm_end <= addr)
					vma = vma->vm_next;
				mm->mmap_cache = vma;
			}
		}
		return vma;
	}
See that vma->vm_next iteration? It was awful - but back then most of us
had at most a couple of megs of RAM and just a few vmas. Little RAM, no
SMP, no worries - the mm was really simple back then.
Today we have the vma rbtree, which is self-balancing and a lot faster
than a linear list walk ;-)
So I'd _really_ suggest first examining the assumptions behind the cache:
it being named 'cache' and having a hit rate does not in itself guarantee
that it gives us any worthwhile cost savings when put in front of an
rbtree ...
Thanks,
Ingo