Date:	Wed, 11 Jan 2012 08:38:26 -0800
From:	Tejun Heo <tj@...nel.org>
To:	Frederic Weisbecker <fweisbec@...il.com>
Cc:	axboe@...nel.dk, mingo@...hat.com, rostedt@...dmis.org,
	teravest@...gle.com, slavapestov@...gle.com, ctalbott@...gle.com,
	dhsharp@...gle.com, linux-kernel@...r.kernel.org,
	winget@...gle.com, namhyung@...il.com,
	"H. Peter Anvin" <hpa@...or.com>
Subject: Re: [PATCH 8/9] stacktrace: implement save_stack_trace_quick()

Hello, Frederic.

On Wed, Jan 11, 2012 at 05:26:44PM +0100, Frederic Weisbecker wrote:
> On Tue, Jan 10, 2012 at 10:28:25AM -0800, Tejun Heo wrote:
> > Implement save_stack_trace_quick(), which only considers the usual
> > contexts (i.e. thread and irq) and doesn't handle links between
> > different contexts: if %current is in irq context, only the
> > backtrace in the irq stack is considered.
> 
> The thing I don't like is the duplication, which involves not only
> the stack unwinding but also the safety checks.

I'm not entirely convinced whether this is necessary or whether we can
just add more features to the existing backtrace facility (and maybe
make it more efficient) and be done with it.
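
To make the kind of walk under discussion concrete, below is a minimal
userspace sketch (not the kernel patch) of a frame-pointer unwind
confined to the current stack, with a simple bounds check standing in
for the safety checks mentioned above.  The save_trace_quick() name,
the frame layout, and the stack bound are assumptions for illustration;
it should build with gcc -O0 -fno-omit-frame-pointer on x86-64.

/*
 * Illustrative only: walk frame pointers of the current context and
 * record return addresses, stopping at a crude stack bound.
 */
#include <stdio.h>

struct frame {
	struct frame *next;		/* saved frame pointer of the caller */
	unsigned long return_addr;	/* return address pushed by the call */
};

static int save_trace_quick(unsigned long *entries, int max_entries)
{
	struct frame *fp = __builtin_frame_address(0);
	unsigned long lo = (unsigned long)&fp;	/* approx. current stack position */
	unsigned long hi = lo + 64 * 1024;	/* assumed stack bound */
	int n = 0;

	while (fp && n < max_entries) {
		unsigned long addr = (unsigned long)fp;

		/* Safety check: stay within the presumed stack region. */
		if (addr < lo || addr >= hi)
			break;
		entries[n++] = fp->return_addr;
		/* Frames must move toward older (higher) addresses. */
		if ((unsigned long)fp->next <= addr)
			break;
		fp = fp->next;
	}
	return n;
}

int main(void)
{
	unsigned long buf[16];
	int i, n = save_trace_quick(buf, 16);

	for (i = 0; i < n; i++)
		printf("#%d %#lx\n", i, buf[i]);
	return 0;
}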

> > This is a subset of dump_trace() done in a much simpler way.  It's
> > intended to be used in hot paths where the overhead of dump_trace()
> > can be too heavy.
> 
> Is it? Have you found a measurable impact (outside the fact that you
> record only one stack)?

As I wrote in the head message, I haven't done a comparative test yet,
but in the preliminary tests the CPU overhead against a memory-backed
device is quite visible (roughly 20%), so I expect it to matter.
Note that testing against a memory-backed device is actually relevant:
on faster SSDs, the CPU is already the bottleneck.

It would be best if we could extend the existing one to cover all the
cases with acceptable overhead.  I needed to write this minimal
version anyway for comparison, so it's posted together; no matter how
it turns out, switching between them isn't difficult.
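
Just for reference, switching at a call site could look roughly like
the sketch below.  The save_stack_trace_quick() signature is assumed
here to mirror save_stack_trace() from <linux/stacktrace.h>, and the
record_io_trace() helper and USE_QUICK_UNWIND switch are made up for
the example; they may not match the actual patch.

#include <linux/stacktrace.h>

#define TRACE_DEPTH	16

/* Hypothetical hot-path helper that captures a backtrace. */
static void record_io_trace(void)
{
	unsigned long entries[TRACE_DEPTH];
	struct stack_trace trace = {
		.entries	= entries,
		.max_entries	= TRACE_DEPTH,
		.skip		= 1,	/* skip record_io_trace() itself */
	};

#ifdef USE_QUICK_UNWIND
	save_stack_trace_quick(&trace);	/* lighter walk for hot paths */
#else
	save_stack_trace(&trace);	/* full unwind via dump_trace() */
#endif

	/* ... consume trace.entries[0..trace.nr_entries) here ... */
}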

Thanks.

-- 
tejun
