Message-ID: <20150413164949.GF6040@gmail.com>
Date:	Mon, 13 Apr 2015 18:49:49 +0200
From:	Ingo Molnar <mingo@...nel.org>
To:	Peter Zijlstra <peterz@...radead.org>
Cc:	rusty@...tcorp.com.au, mathieu.desnoyers@...icios.com,
	oleg@...hat.com, paulmck@...ux.vnet.ibm.com,
	torvalds@...ux-foundation.org, linux-kernel@...r.kernel.org,
	andi@...stfloor.org, rostedt@...dmis.org, tglx@...utronix.de,
	laijs@...fujitsu.com, linux@...izon.com
Subject: Re: [PATCH v5 07/10] module: Optimize __module_address() using a
 latched RB-tree


* Peter Zijlstra <peterz@...radead.org> wrote:

> Currently __module_address() is using a linear search through all
> modules in order to find the module corresponding to the provided
> address. With a lot of modules this can take a lot of time.
>
> One of the users of this is kernel_text_address() which is employed 
> in many stack unwinders; which in turn are used by perf-callchain 
> and ftrace (possibly from NMI context).
> 
> So by optimizing __module_address() we optimize many stack unwinders 
> which are used by both perf and tracing in performance sensitive 
> code.

So my (rather typical) workstation has 116 modules loaded currently - 
but setups using in excess of 150 modules are not uncommon either.

A linear list walk of 100-150 entries for every single call chain 
entry that hits some module, in 'perf record -g', can cause some 
overhead!
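
Just to make that concrete, here's a minimal userspace sketch (not
kernel code; the module count, sizes and addresses below are made up)
contrasting the linear walk that __module_address() does today with a
search over the same ranges sorted by start address, which is roughly
what the latched RB-tree gives us, minus the lockless update
machinery:

#include <stdio.h>

struct range { unsigned long start, size; };

/* What __module_address() effectively does today: O(n) */
static int linear_find(const struct range *r, int n, unsigned long addr)
{
	int i;

	for (i = 0; i < n; i++)
		if (addr - r[i].start < r[i].size)
			return i;
	return -1;
}

/* Tree-style lookup over the same ranges, sorted by ->start: O(log n) */
static int binary_find(const struct range *r, int n, unsigned long addr)
{
	int lo = 0, hi = n - 1;

	while (lo <= hi) {
		int mid = lo + (hi - lo) / 2;

		if (addr < r[mid].start)
			hi = mid - 1;
		else if (addr >= r[mid].start + r[mid].size)
			lo = mid + 1;
		else
			return mid;
	}
	return -1;
}

int main(void)
{
	struct range mods[150];
	int i;

	/* 150 fake modules, 1 MB apart, 512 KB each: */
	for (i = 0; i < 150; i++) {
		mods[i].start = 0xa0000000UL + i * 0x100000UL;
		mods[i].size  = 0x80000UL;
	}

	/* An address in the last module: the worst case for the walk. */
	printf("linear: %d, binary: %d\n",
	       linear_find(mods, 150, mods[149].start + 0x10),
	       binary_find(mods, 150, mods[149].start + 0x10));

	return 0;
}

The linear walk does 150 range checks for that address; the sorted
lookup does eight.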

> +	/*
> +	 * If this is non-NULL, vfree after init() returns.

s/vfree/vfree()

> +	/*
> +	 * We want mtn_core::{mod,node[0]} to be in the same cacheline as the
> +	 * above entries such that a regular lookup will only touch the one
> +	 * cacheline.

s/touch the one cacheline
 /touch one cacheline

?
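
For readers without the other patches handy, the layout being
discussed is roughly this (names from the quoted comment and the
latch-tree patch earlier in the series; simplified here):

struct latch_tree_node {
	struct rb_node node[2];		/* two copies, one per latch side */
};

struct mod_tree_node {
	struct module *mod;
	struct latch_tree_node node;
};

In struct module, mtn_core sits right next to the module_core and
core_size fields, so ->mod, ->node.node[0] and the base/size values
the comparator reads all land in the same cacheline for the common
core-section lookup.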

> +static __always_inline int
> +mod_tree_comp(void *key, struct latch_tree_node *n)
> +{
> +	unsigned long val = (unsigned long)key;
> +	unsigned long start, end;
> +
> +	end = start = __mod_tree_val(n);
> +	end += __mod_tree_size(n);
> +
> +	if (val < start)
> +		return -1;
> +
> +	if (val >= end)
> +		return 1;
> +
> +	return 0;

So since we are counting nanoseconds, I suspect this could be written 
more optimally, by checking 'val < start' before computing 'end', so 
that the below-range case skips the __mod_tree_size() load and the 
addition:

{
	unsigned long val = (unsigned long)key;
	unsigned long start, end;

	start = __mod_tree_val(n);
	if (val < start)
		return -1;

	end = start + __mod_tree_size(n);
	if (val >= end)
		return 1;

	return 0;
}

right?
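
For context, the comparator gets driven by the latch-tree lookup from
earlier in the series, so the whole fast path is essentially (sketched
from the patch, details elided):

static struct module *mod_find(unsigned long addr)
{
	struct latch_tree_node *ltn;

	/* RCU-safe, NMI-safe descent; mod_tree_comp() steers it: */
	ltn = latch_tree_find((void *)addr, &mod_tree.root, &mod_tree_ops);
	if (!ltn)
		return NULL;

	return container_of(ltn, struct mod_tree_node, node)->mod;
}

so every cycle shaved off the comparator is multiplied by the depth of
the descent.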

Thanks,

	Ingo
