Date:	Mon, 03 Mar 2014 16:18:12 -0800
From:	Davidlohr Bueso <davidlohr@...com>
To:	Andrew Morton <akpm@...ux-foundation.org>
Cc:	Ingo Molnar <mingo@...nel.org>,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	Peter Zijlstra <peterz@...radead.org>,
	Michel Lespinasse <walken@...gle.com>,
	Mel Gorman <mgorman@...e.de>, Rik van Riel <riel@...hat.com>,
	KOSAKI Motohiro <kosaki.motohiro@...il.com>, aswin@...com,
	scott.norton@...com, linux-mm@...ck.org,
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH v4] mm: per-thread vma caching

On Mon, 2014-03-03 at 16:00 -0800, Andrew Morton wrote:
> On Thu, 27 Feb 2014 13:48:24 -0800 Davidlohr Bueso <davidlohr@...com> wrote:
> 
> > From: Davidlohr Bueso <davidlohr@...com>
> > 
> > This patch is a continuation of efforts trying to optimize find_vma(),
> > avoiding potentially expensive rbtree walks to locate a vma upon faults.
> > The original approach (https://lkml.org/lkml/2013/11/1/410), where the
> > largest vma was also cached, ended up being too specific and random, thus
> > further comparisons with other approaches were needed. There are two things
> > to consider here: the cache hit rate and the latency of find_vma().
> > Improving the hit rate does not necessarily translate into finding the vma
> > any faster, as the overhead of any fancy caching scheme can be too high to
> > be worthwhile.
> > 
> > We currently cache the last used vma for the whole address space, which
> > provides a nice optimization, reducing the total cycles in find_vma() by up
> > to 250%, for workloads with good locality. On the other hand, this simple
> > scheme is pretty much useless for workloads with poor locality. Analyzing
> > ebizzy runs shows that, no matter how many threads are running, the
> > mmap_cache hit rate is less than 2%, and in many situations below 1%.
> > 
> > The proposed approach is to replace this scheme with a small per-thread cache,
> > maximizing hit rates at a very low maintenance cost. Invalidations are
> > performed by simply bumping up a 32-bit sequence number. The only expensive
> > operation is in the rare case of a seq number overflow, where all caches that
> > share the same address space are flushed. Upon a miss, the proposed replacement
> > policy is based on the page number that contains the virtual address in
> > question. Concretely, the following results are seen on an 80-core, 8-socket
> > x86-64 box:
> > 
> > ...
> > 
> > 2) Kernel build: This one is already pretty good with the current approach
> > as we're dealing with good locality.
> > 
> > +----------------+----------+------------------+
> > | caching scheme | hit-rate | cycles (billion) |
> > +----------------+----------+------------------+
> > | baseline       | 75.28%   | 11.03            |
> > | patched        | 88.09%   | 9.31             |
> > +----------------+----------+------------------+
> 
> What is the "cycles" number here?  I'd like to believe we sped up kernel
> builds by 10% ;)
> 
> Were any overall run time improvements observable?

Weeell, not too much (I wouldn't normally go measuring cycles if I could
use a benchmark instead ;). As discussed a while back, all this occurs
under the mmap_sem anyway, so while we do optimize find_vma() in more
workloads than before, it doesn't translate into better benchmark
throughput :( The same holds if we get rid of any caching and just rely
on rbtree walks: sure, the cost of find_vma() goes way up, but that
really doesn't hurt from a user perspective. Fwiw, in ebizzy perf traces
I did see find_vma() go from ~7% to ~0.4%.
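
Just to make the scheme concrete, here is a rough user-space model of the
per-thread cache and the page-number replacement policy described in the
changelog. The sizes, the modulo indexing and all the model_* names are
illustrative only, not the actual patch code:

#include <stdint.h>
#include <string.h>

#define MODEL_PAGE_SHIFT	12
#define MODEL_CACHE_SIZE	4

struct model_vma {
	unsigned long vm_start;
	unsigned long vm_end;		/* exclusive */
};

struct model_thread_cache {
	uint32_t seqnum;		/* thread's copy of the mm-wide seqnum */
	struct model_vma *slot[MODEL_CACHE_SIZE];
};

/* Replacement policy: the slot is chosen by the address's page number. */
static inline int model_idx(unsigned long addr)
{
	return (addr >> MODEL_PAGE_SHIFT) % MODEL_CACHE_SIZE;
}

static struct model_vma *model_find(struct model_thread_cache *tc,
				    uint32_t mm_seqnum, unsigned long addr)
{
	struct model_vma *vma;

	if (tc->seqnum != mm_seqnum) {
		/* the address space changed; resync and report a miss */
		tc->seqnum = mm_seqnum;
		memset(tc->slot, 0, sizeof(tc->slot));
		return NULL;
	}

	vma = tc->slot[model_idx(addr)];
	if (vma && addr >= vma->vm_start && addr < vma->vm_end)
		return vma;

	return NULL;
}

/* On a miss, the rbtree result replaces whatever occupied the slot. */
static void model_update(struct model_thread_cache *tc,
			 unsigned long addr, struct model_vma *vma)
{
	tc->slot[model_idx(addr)] = vma;
}

Part of the point, compared to the single per-mm mmap_cache, is that
threads with different access patterns no longer evict each other's
cached vma.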

> 
> > ...
> >
> > @@ -1228,6 +1229,9 @@ struct task_struct {
> >  #ifdef CONFIG_COMPAT_BRK
> >  	unsigned brk_randomized:1;
> >  #endif
> > +	/* per-thread vma caching */
> > +	u32 vmacache_seqnum;
> > +	struct vm_area_struct *vmacache[VMACACHE_SIZE];
> 
> So these are implicitly locked by being per-thread.

Yes.
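
Only the owning thread ever reads or writes its vmacache[] and
vmacache_seqnum, so the lookup fast path needs no locking beyond what
find_vma() already holds. Roughly, and leaving aside the seqnum check
for the moment (a sketch, not the actual hunk; vmacache_idx() here just
stands for the page-number slot selection described in the changelog):

static struct vm_area_struct *vmacache_find(struct mm_struct *mm,
					    unsigned long addr)
{
	/* only 'current' ever touches these slots */
	struct vm_area_struct *vma = current->vmacache[vmacache_idx(addr)];

	if (vma && vma->vm_mm == mm &&
	    vma->vm_start <= addr && vma->vm_end > addr)
		return vma;

	return NULL;
}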

> > +static inline void vmacache_invalidate(struct mm_struct *mm)
> > +{
> > +	mm->vmacache_seqnum++;
> > +
> > +	/* deal with overflows */
> > +	if (unlikely(mm->vmacache_seqnum == 0))
> > +		vmacache_flush_all(mm);
> > +}
> 
> What's the locking rule for mm->vmacache_seqnum?

Invalidations occur with mmap_sem held for writing, just as updates to
mm->mmap_cache did.
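
On the read side, the lookup only trusts its slots after comparing the
thread's copy against mm->vmacache_seqnum, something along these lines
(again a sketch, not the exact code in the patch):

static bool vmacache_valid(struct mm_struct *mm)
{
	struct task_struct *curr = current;

	if (mm != curr->mm)
		return false;

	if (mm->vmacache_seqnum != curr->vmacache_seqnum) {
		/* pick up the new generation and start out cold */
		curr->vmacache_seqnum = mm->vmacache_seqnum;
		memset(curr->vmacache, 0, sizeof(curr->vmacache));
		return false;
	}

	return true;
}

Since lookups run with mmap_sem held at least for reading, and the bump
happens only with it held for writing, the comparison cannot race with a
concurrent invalidation.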

Thanks,
Davidlohr
