Message-Id: <57679B57.40905@linux.vnet.ibm.com>
Date:	Mon, 20 Jun 2016 15:29:27 +0800
From:	xinhui <xinhui.pan@...ux.vnet.ibm.com>
To:	Byungchul Park <byungchul.park@....com>, peterz@...radead.org,
	mingo@...nel.org
CC:	linux-kernel@...r.kernel.org, npiggin@...e.de, walken@...gle.com,
	ak@...e.de, tglx@...elltoy.tec.linutronix.de
Subject: Re: [RFC 12/12] x86/dumpstack: Optimize save_stack_trace


On 2016-06-20 12:55, Byungchul Park wrote:
> Currently, the x86 implementation of save_stack_trace() walks the whole
> stack region word by word regardless of trace->max_entries.
> However, it is unnecessary to keep walking once the caller's requirement
> is already fulfilled, i.e. once trace->nr_entries >= trace->max_entries.
>
> For example, the CONFIG_LOCKDEP_CROSSRELEASE implementation calls
> save_stack_trace() with max_entries = 5 frequently. I measured its
> overhead by printing the difference of sched_clock() on my QEMU x86
> machine.
>
> The latency improved by over 70% when trace->max_entries = 5.
>
[snip]

> +static int save_stack_end(void *data)
> +{
> +	struct stack_trace *trace = data;
> +	return trace->nr_entries >= trace->max_entries;
> +}
> +
>   static const struct stacktrace_ops save_stack_ops = {
>   	.stack		= save_stack_stack,
>   	.address	= save_stack_address,
Then why not check the return value of ->address() instead? A return value of -1 indicates there is no room left to store any pointer.

>   	.walk_stack	= print_context_stack,
> +	.end_walk	= save_stack_end,
>   };
>
>   static const struct stacktrace_ops save_stack_ops_nosched = {
>
