Message-ID: <20131104070034.GD13030@gmail.com>
Date:	Mon, 4 Nov 2013 08:00:34 +0100
From:	Ingo Molnar <mingo@...nel.org>
To:	Davidlohr Bueso <davidlohr@...com>
Cc:	Andrew Morton <akpm@...ux-foundation.org>,
	Hugh Dickins <hughd@...gle.com>,
	Michel Lespinasse <walken@...gle.com>,
	Mel Gorman <mgorman@...e.de>, Rik van Riel <riel@...hat.com>,
	Guan Xuetao <gxt@...c.pku.edu.cn>, aswin@...com,
	linux-kernel@...r.kernel.org, linux-mm@...ck.org,
	Linus Torvalds <torvalds@...ux-foundation.org>
Subject: Re: [PATCH] mm: cache largest vma


* Davidlohr Bueso <davidlohr@...com> wrote:

> On Sun, 2013-11-03 at 11:12 +0100, Ingo Molnar wrote:
> > * Davidlohr Bueso <davidlohr@...com> wrote:
> > 
> > > While caching the last used vma already does a nice job of
> > > avoiding rbtree iteration in find_vma, we can improve on it. After
> > > studying the hit rate on a range of workloads and environments,
> > > it turned out to be around 45-50% - constant for a standard
> > > desktop system (gnome3 + evolution + firefox + a few xterms),
> > > for several Java-related workloads (including Hadoop/terasort),
> > > and for aim7 - which indicates it's better than the 35% value
> > > documented in the code.
> > > 
> > > By also caching the largest vma, that is, the one spanning the
> > > most addresses, we get a steady 10-15% hit rate gain, putting
> > > the total above 60%. This improvement comes at very low overhead
> > > on a miss. Furthermore, systems with !CONFIG_MMU keep the
> > > current logic.
> > > 
> > > This patch introduces a second mmap_cache pointer, which is just
> > > as racy as the first but, as we already know, that doesn't matter
> > > in this context. For documentation purposes, I have also added
> > > ACCESS_ONCE() around the mm->mmap_cache updates, keeping them
> > > consistent with the reads.
> > > 
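To make the scheme concrete, the two-slot lookup could look roughly like
this - a sketch rather than the posted patch; the mmap_cache_largest
field and the find_vma_rbtree() helper are illustrative stand-ins:

struct vm_area_struct *find_vma(struct mm_struct *mm, unsigned long addr)
{
        struct vm_area_struct *vma;

        /* First try the cached last-used vma ... */
        vma = ACCESS_ONCE(mm->mmap_cache);
        if (vma && vma->vm_end > addr && vma->vm_start <= addr)
                return vma;

        /* ... then the cached largest vma, covering the most addresses. */
        vma = ACCESS_ONCE(mm->mmap_cache_largest);      /* assumed field */
        if (vma && vma->vm_end > addr && vma->vm_start <= addr)
                return vma;

        /* Both slots missed: fall back to the rbtree walk. */
        vma = find_vma_rbtree(mm, addr);        /* stand-in for rb loop */
        if (vma) {
                ACCESS_ONCE(mm->mmap_cache) = vma;
                /* As racy as mmap_cache itself; harmless here. */
                if (!mm->mmap_cache_largest ||
                    vma->vm_end - vma->vm_start >
                    mm->mmap_cache_largest->vm_end -
                    mm->mmap_cache_largest->vm_start)
                        ACCESS_ONCE(mm->mmap_cache_largest) = vma;
        }
        return vma;
}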
> > > Cc: Hugh Dickins <hughd@...gle.com>
> > > Cc: Michel Lespinasse <walken@...gle.com>
> > > Cc: Ingo Molnar <mingo@...nel.org>
> > > Cc: Mel Gorman <mgorman@...e.de>
> > > Cc: Rik van Riel <riel@...hat.com>
> > > Cc: Guan Xuetao <gxt@...c.pku.edu.cn>
> > > Signed-off-by: Davidlohr Bueso <davidlohr@...com>
> > > ---
> > > Please note that nommu and unicore32 arch are *untested*.
> > > 
> > > I also have a patch on top of this one that caches the most 
> > > used vma, which adds another 8-10% hit rate gain. However,
> > > since it adds a counter to the vma structure and requires more
> > > logic in find_vma to keep track, I was hesitant about the
> > > overhead. If folks are interested I can send that out as well.
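That counter-based variant might look something like the following
sketch (the vm_usage and mmap_cache_hot fields are assumptions for
illustration, not the actual follow-up patch):

/* Called on every find_vma() hit - this per-lookup bookkeeping is the
 * overhead being weighed against the extra 8-10% hit rate. */
static inline void vma_count_hit(struct mm_struct *mm,
                                 struct vm_area_struct *vma)
{
        vma->vm_usage++;                        /* assumed new counter */
        if (!mm->mmap_cache_hot ||
            vma->vm_usage > mm->mmap_cache_hot->vm_usage)
                ACCESS_ONCE(mm->mmap_cache_hot) = vma;  /* assumed slot */
}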
> > 
> > Would be interesting to see.
> > 
> > Btw., roughly how many cycles/instructions do we save by increasing 
> > the hit rate, in the typical case (for example during a kernel build)?
> 
> Good point. The IPC from perf stat doesn't show any difference with or 
> without the patch -- though a kernel build is probably the least 
> interesting workload here, as we already get a really nice hit rate 
> with the single mmap_cache. I have yet to try it on the other workloads.

I'd be surprised if this was measurable via perf stat, unless you do the 
measurement in a really, really careful way - and even then it's easy to 
make a hard-to-detect mistake larger in magnitude than the measured 
effect ...

An easier and more reliable measurement would be to stick 2-3 get_cycles() 
calls into the affected code and save the raw timestamps into 
task.se.statistics, then extract the timestamps via /proc/sched_debug by 
adding matching seq_printf()s to kernel/sched/debug.c. (You can clear the 
statistics by echoing 0 to /proc/<PID>/sched, see 
proc_sched_set_task().)
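
Concretely, the instrumentation could look roughly like this sketch 
(find_vma_cycles is a hypothetical statistics field, and __find_vma() 
stands in for the existing lookup body):

#include <linux/sched.h>
#include <linux/timex.h>        /* get_cycles(), cycles_t */

struct vm_area_struct *find_vma(struct mm_struct *mm, unsigned long addr)
{
        cycles_t t0 = get_cycles();
        struct vm_area_struct *vma = __find_vma(mm, addr); /* real lookup */

        /* Accumulate in the per-task schedstats (assumed new field). */
        current->se.statistics.find_vma_cycles += get_cycles() - t0;
        return vma;
}

/* And a matching line in kernel/sched/debug.c:proc_sched_show_task():
 *
 *      SEQ_printf(m, "%-45s:%21Lu\n", "find_vma_cycles",
 *                 (unsigned long long)p->se.statistics.find_vma_cycles);
 */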

That measurement is still subject to skid and other artifacts, but 
hopefully the effect is larger than the cycles fuzz - and we are 
interested in a ballpark figure in any case.

Thanks,

	Ingo
