Message-ID: <20090416182928.GB13940@Krystal>
Date:	Thu, 16 Apr 2009 14:29:28 -0400
From:	Mathieu Desnoyers <mathieu.desnoyers@...ymtl.ca>
To:	Steven Rostedt <rostedt@...dmis.org>
Cc:	Peter Zijlstra <peterz@...radead.org>,
	LKML <linux-kernel@...r.kernel.org>, Ingo Molnar <mingo@...e.hu>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Thomas Gleixner <tglx@...utronix.de>,
	Frederic Weisbecker <fweisbec@...il.com>
Subject: Re: [PATCH 2/2] tracing/events/lockdep: move tracepoints within
	recursive protection

* Steven Rostedt (rostedt@...dmis.org) wrote:
> 
> On Thu, 16 Apr 2009, Peter Zijlstra wrote:
> > 
> > > That is, in thread context you are at level 0; if an interrupt comes
> > > in, it puts you at level 1; if another interrupt comes in, it puts you
> > > at level 2, and so on.
> > > 
> > > I guess we could add this into irq_enter/exit, softirq_enter/exit and 
> > > nmi_enter/exit.
> > > 
> > > Thus we can have each task with a bitmask. When we start to trace, we set 
> > > the bit corresponding to the level the task is at.
> > > 
> > > I.e. in thread context we set bit 0; if we are interrupted by a 
> > > softirq/irq/nmi, we set the bit for the level we are at. Hmm, we might be 
> > > able to do this via the preempt count already :-/
> > > 
> > > Just add the softirq/irq/nmi bits together.
> > > 
> > > Then, if the bit is already set, we can dump out a warning.
> > > 
> > > I'll try that out.
> > 
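
A rough sketch of that idea, using a per-CPU mask for simplicity instead of
the per-task one (helper names are made up for illustration; this is not the
actual patch):

#include <linux/hardirq.h>
#include <linux/bitops.h>
#include <linux/percpu.h>
#include <linux/kernel.h>

static DEFINE_PER_CPU(unsigned long, trace_recursion_mask);

static int trace_recursion_bit(void)
{
	if (in_nmi())
		return 3;
	if (in_irq())
		return 2;
	if (in_softirq())
		return 1;
	return 0;	/* plain thread context */
}

/* Returns the bit taken, or -1 if we are already tracing in this context. */
static int trace_recursion_enter(void)
{
	int bit = trace_recursion_bit();

	if (test_and_set_bit(bit, &__get_cpu_var(trace_recursion_mask))) {
		WARN_ON_ONCE(1);	/* recursive tracer call */
		return -1;
	}
	return bit;
}

static void trace_recursion_exit(int bit)
{
	clear_bit(bit, &__get_cpu_var(trace_recursion_mask));
}
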
> > static int *perf_swcounter_recursion_context(struct perf_cpu_context *cpuctx)
> > {
> >         if (in_nmi())
> >                 return &cpuctx->recursion[3];
> > 
> >         if (in_irq())
> >                 return &cpuctx->recursion[2];
> > 
> >         if (in_softirq())
> >                 return &cpuctx->recursion[1];
> > 
> >         return &cpuctx->recursion[0];
> > }
> > 
> > Is what I use for perf-counters.
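
Presumably the returned per-context slot then brackets the instrumented path
roughly like this (a sketch only, not the exact perf-counters code):

static void perf_swcounter_event_sketch(struct perf_cpu_context *cpuctx)
{
	int *recursion = perf_swcounter_recursion_context(cpuctx);

	if (*recursion)
		return;		/* already tracing in this context, bail out */

	(*recursion)++;
	barrier();

	/* ... record the software counter event here ... */

	barrier();
	(*recursion)--;
}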
> 
> But does that allow multiple nested interrupts?
> 
> I'll try the preempt_count and let you know.
> 
> Thanks,
> 
> -- Steve
> 

In practice, I have used a "tracing nesting level" detection counter in
LTTng which, if memory serves me well, drops events once they reach a
nesting level of about 5 or more. This should be enough to detect
recursive tracer calls before they fill the stack, while still handling
the worst nesting we can think of (thread + softirq + irq + nmi).

And it's _really_ easy to implement, arch-independent, and has no
special corner cases.
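
A minimal sketch of what such a counter can look like (per-CPU here; names
and the exact threshold are illustrative, not LTTng's actual code):

#define TRACER_MAX_NESTING	4	/* thread + softirq + irq + nmi */

static DEFINE_PER_CPU(int, tracer_nesting);

/* Assumes preemption is already disabled in the tracing path. */
static int tracer_nesting_enter(void)
{
	if (__get_cpu_var(tracer_nesting) >= TRACER_MAX_NESTING)
		return -1;	/* nested too deep: drop the event */

	__get_cpu_var(tracer_nesting)++;
	return 0;
}

static void tracer_nesting_exit(void)
{
	__get_cpu_var(tracer_nesting)--;
}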

Mathieu

-- 
Mathieu Desnoyers
OpenPGP key fingerprint: 8CD5 52C3 8E3C 4140 715F  BA06 3F25 A8FE 3BAE 9A68
