Date:	Mon, 24 Aug 2015 08:58:09 +0200
From:	Ingo Molnar <mingo@...nel.org>
To:	Rasmus Villemoes <linux@...musvillemoes.dk>
Cc:	George Spelvin <linux@...izon.com>, dave@...1.net,
	linux-kernel@...r.kernel.org, linux-mm@...ck.org,
	peterz@...radead.org, riel@...hat.com, rientjes@...gle.com,
	torvalds@...ux-foundation.org
Subject: Re: [PATCH 3/3 v3] mm/vmalloc: Cache the vmalloc memory info


* Rasmus Villemoes <linux@...musvillemoes.dk> wrote:

> On Sun, Aug 23 2015, Ingo Molnar <mingo@...nel.org> wrote:
> 
> > Ok, fair enough - so how about the attached approach instead, which
> > uses a 64-bit generation counter to track changes to the vmalloc
> > state.
> 
> How does this invalidation approach compare to the jiffies approach? In
> other words, how often does the vmalloc info actually change (or rather,
> in this approximation, how often is vmap_area_lock taken)? In
> particular, does it also solve the problem with git's test suite and
> similar situations with lots of short-lived processes?

The two approaches are pretty similar; in a typical distro with a typical 
workload, vmalloc() is mostly a boot-time affair.

But vmalloc() can be used more frequently in certain corner cases; neither of 
the patches makes that path any slower, the optimization just won't trigger as 
often.

Since vmalloc() use is suboptimal for several reasons anyway (it does not use 
large pages for kernel-space allocations, etc.), this is all pretty OK IMHO.

> > ==============================>
> > From f9fd770e75e2edb4143f32ced0b53d7a77969c94 Mon Sep 17 00:00:00 2001
> > From: Ingo Molnar <mingo@...nel.org>
> > Date: Sat, 22 Aug 2015 12:28:01 +0200
> > Subject: [PATCH] mm/vmalloc: Cache the vmalloc memory info
> >
> > Linus reported that glibc (rather stupidly) reads /proc/meminfo
> > for every sysinfo() call,
> 
> Not quite: it is done by the two functions get_{av,}phys_pages, and
> get_phys_pages is called (once per process) by glibc's qsort
> implementation. In fact, sysinfo() is (at least part of) the cure,
> not the disease. Whether qsort should care about the total amount of
> memory is another discussion.
> 
> <http://thread.gmane.org/gmane.comp.lib.glibc.alpha/54342/focus=54558>
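
For reference, the sysinfo() route boils down to something like the sketch
below; phys_pages_via_sysinfo() is an illustrative name, not glibc's actual
code:

#include <sys/sysinfo.h>
#include <unistd.h>

/*
 * Hypothetical sketch: derive the number of physical pages from
 * sysinfo(2) instead of parsing /proc/meminfo.
 */
static long phys_pages_via_sysinfo(void)
{
	struct sysinfo si;

	if (sysinfo(&si) != 0)
		return -1;

	/* si.totalram is measured in units of si.mem_unit bytes: */
	return (long)((unsigned long long)si.totalram * si.mem_unit /
		      sysconf(_SC_PAGESIZE));
}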

Thanks, is the fixed-up changelog below better?

	Ingo

===============>

mm/vmalloc: Cache the vmalloc memory info

Linus reported that for scripting-intensive workloads such as the
Git build, glibc's qsort() will read /proc/meminfo for every process
created (by way of get_phys_pages()), which causes the Git build to
generate a surprising amount of kernel overhead.

A fair chunk of the overhead is due to get_vmalloc_info(), which
walks a potentially long list to compute its statistics.

Modify Linus's jiffies-based patch to use a generation counter
to cache the vmalloc info: vmap_unlock() increments the generation
counter, and get_vmalloc_info() reads it and compares it against
a cached generation counter.

Also use a seqlock to make sure we always print a consistent
set of vmalloc statistics.

Reported-by: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Andrew Morton <akpm@...ux-foundation.org>
Cc: Rik van Riel <riel@...hat.com>
Cc: linux-mm@...ck.org
Signed-off-by: Ingo Molnar <mingo@...nel.org>
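
For clarity, the scheme amounts to roughly the following sketch; the
identifiers vmap_info_gen, vmap_info_lock, vmap_info_cache and
calc_vmalloc_info() are illustrative, not necessarily the final patch's:

#include <linux/atomic.h>
#include <linux/seqlock.h>
#include <linux/spinlock.h>

static atomic64_t vmap_info_gen;	/* bumped on every vmalloc state change */
static DEFINE_SEQLOCK(vmap_info_lock);	/* protects the cached copy below */
static u64 vmap_info_cache_gen;
static struct vmalloc_info vmap_info_cache;

static inline void vmap_unlock(void)
{
	spin_unlock(&vmap_area_lock);
	atomic64_inc(&vmap_info_gen);	/* invalidate the cached info */
}

void get_vmalloc_info(struct vmalloc_info *vmi)
{
	u64 gen = atomic64_read(&vmap_info_gen);
	unsigned int seq;

	/*
	 * Fast path: if the cache matches the current generation,
	 * copy it out under the seqlock and we are done.
	 */
	do {
		seq = read_seqbegin(&vmap_info_lock);
		if (vmap_info_cache_gen != gen)
			goto slow_path;
		*vmi = vmap_info_cache;
	} while (read_seqretry(&vmap_info_lock, seq));

	return;

slow_path:
	/* Slow path: walk the vmap_area list and refresh the cache. */
	calc_vmalloc_info(vmi);

	write_seqlock(&vmap_info_lock);
	vmap_info_cache = *vmi;
	vmap_info_cache_gen = gen;
	write_sequnlock(&vmap_info_lock);
}

If an update races with a reader here, the worst case is a spurious
recomputation of the statistics, which is harmless.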