Date:	Fri, 23 Sep 2011 07:39:33 -0400
From:	Steven Rostedt <rostedt@...dmis.org>
To:	Peter Zijlstra <peterz@...radead.org>
Cc:	linux-kernel@...r.kernel.org, Ingo Molnar <mingo@...e.hu>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Frederic Weisbecker <fweisbec@...il.com>,
	Thomas Gleixner <tglx@...utronix.de>
Subject: Re: [PATCH 21/21] tracing: Add optional percpu buffers for
 trace_printk()

On Fri, 2011-09-23 at 13:28 +0200, Peter Zijlstra wrote:

> Of course, printk() is most useless, its too slow, and its definitely
> not NMI-safe.
> 
> If you've only got a single buffer, I don't see why you would need
> per-cpu recursion detection, just spin_try_lock() the thing and if you
> fail, bail. But that's not what the changelog said, or at least implied.

Not sure if the raw spinlocks had a trylock variant back when this was
written. The recursion detection was there from day one and has nothing
to do with this patch, which is why the changelog said nothing about it.
This patch is only about keeping trace_printk() from adding extra
synchronization. That bit me once, when a trace_printk() caused a race
to disappear.


> 
> The possibility of dropping trace output like that is most worrying
> though. I can imagine myself tearing my hair out trying to make sense of
> a trace, and then wanting to kick you after finding out it lost the
> crucial bit.

Understood, but NMIs are always something to worry about; they can
cause strange issues.

> 
> > > better to make all of trace_printk() depend on that extra config, there
> > > is absolutely 0 point in having a broken and fully serialized trace
> > > 'fail^wfeature'.
> > 
> > No, having per-cpu buffers still doesn't allow NMIs to interrupt
> > trace_printk(). Otherwise the NMI would just corrupt the current
> > per-cpu buffer.
> 
> Multiple buffers sounds about right, not sure if you disable interrupts
> over the normal trace path, but ISTR you don't, so you need
> task/softirq/irq/nmi buffers per cpu.

trace_printk() does disable interrupts, so it is only an NMI issue.


> 
> Really losing trace output is not an option, ever (aside from stuff
> falling off the end of the buffer). Reliability first, performance
> second.
> 

Fine, then I'll add (on top of this patch) a CONFIG_TRACE_PRINTK_FULL or
some such name that will create the per-cpu NMI and normal buffers, so
no events are lost. I just want to avoid wasting that memory for
99.99999% of users.

I could even make TRACE_PRINTK_FULL not disable interrupts at all and
use the normal/softirq/irq/nmi per-cpu buffers. That is something that
would *not* be enabled in production.

-- Steve


