Date:	Wed, 4 Feb 2015 15:51:36 +0100
From:	Jiri Olsa <jolsa@...hat.com>
To:	Peter Zijlstra <peterz@...radead.org>
Cc:	Vince Weaver <vincent.weaver@...ne.edu>, mingo@...nel.org,
	linux-kernel@...r.kernel.org, eranian@...il.com,
	mark.rutland@....com, torvalds@...ux-foundation.org,
	tglx@...utronix.de
Subject: Re: [RFC][PATCH 2/3] perf: Add a bit of paranoia

On Mon, Feb 02, 2015 at 06:32:32PM +0100, Peter Zijlstra wrote:
> On Mon, Feb 02, 2015 at 04:42:40PM +0100, Peter Zijlstra wrote:
> > On Mon, Feb 02, 2015 at 01:33:14AM -0500, Vince Weaver wrote:
> > > [407484.309136] ------------[ cut here ]------------
> 
> > > [407484.588602]  <<EOE>>  <IRQ>  [<ffffffff8115c28c>] perf_prepare_sample+0x2ec/0x3c0
> > > [407484.597358]  [<ffffffff8115c46e>] __perf_event_overflow+0x10e/0x270
> > > [407484.604708]  [<ffffffff8115c439>] ? __perf_event_overflow+0xd9/0x270
> > > [407484.612215]  [<ffffffff8115c924>] ? perf_tp_event+0xc4/0x210
> > > [407484.619000]  [<ffffffff8115cfe2>] ? __perf_sw_event+0x72/0x1f0
> > > [407484.625937]  [<ffffffff8115c799>] ? perf_swevent_overflow+0xa9/0xe0
> > > [407484.633287]  [<ffffffff8115c799>] perf_swevent_overflow+0xa9/0xe0
> > > [407484.640467]  [<ffffffff8115c837>] perf_swevent_event+0x67/0x90
> > > [407484.647343]  [<ffffffff8115c924>] perf_tp_event+0xc4/0x210
> > > [407484.653923]  [<ffffffff810b6fa9>] ? lock_acquire+0x119/0x130
> > > [407484.660606]  [<ffffffff810b3cf6>] ? perf_trace_lock_acquire+0x146/0x180
> > > [407484.668332]  [<ffffffff810b594f>] ? __lock_acquire.isra.31+0x3af/0xfe0
> > > [407484.675962]  [<ffffffff810b3cf6>] perf_trace_lock_acquire+0x146/0x180
> > > [407484.683490]  [<ffffffff810b6fa9>] ? lock_acquire+0x119/0x130
> > > [407484.690211]  [<ffffffff810b6fa9>] lock_acquire+0x119/0x130
> > > [407484.696750]  [<ffffffff8115b7f5>] ? perf_event_wakeup+0x5/0xf0
> > > [407484.703640]  [<ffffffff811f50ef>] ? kill_fasync+0xf/0xf0
> > > [407484.710008]  [<ffffffff8115b828>] perf_event_wakeup+0x38/0xf0
> > > [407484.716798]  [<ffffffff8115b7f5>] ? perf_event_wakeup+0x5/0xf0
> > > [407484.723696]  [<ffffffff8115b913>] perf_pending_event+0x33/0x60
> > > [407484.730570]  [<ffffffff8114cc7c>] irq_work_run_list+0x4c/0x80
> > > [407484.737392]  [<ffffffff8114ccc8>] irq_work_run+0x18/0x40
> > > [407484.743765]  [<ffffffff8101955f>] smp_trace_irq_work_interrupt+0x3f/0xc0
> > > [407484.751579]  [<ffffffff816c01fd>] trace_irq_work_interrupt+0x6d/0x80
> 
> > > [407484.799195] ---[ end trace 55752a03ec8ab979 ]---
> > 
> > That looks like tail recursive fun! An irq work that raises an irq work
> > ad infinitum. Lemme see if I can squash that.. didn't we have something
> > like this before... /me goes look.
> 
> 
> Does this make it go away?
> 
> --- a/kernel/events/core.c
> +++ b/kernel/events/core.c
> @@ -4413,6 +4413,8 @@ static void perf_pending_event(struct ir
>  	struct perf_event *event = container_of(entry,
>  			struct perf_event, pending);
>  
> +	int rctx = perf_swevent_get_recursion_context();
> +

hum, you should check the rctx:

	if (rctx == -1)
		return;
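
putting the two together, the handler with that check folded in would
read roughly like below (a sketch against the quoted 3.19-era
kernel/events/core.c, untested; the early return on -1 is just my
reading of the suggestion):

	static void perf_pending_event(struct irq_work *entry)
	{
		struct perf_event *event = container_of(entry,
				struct perf_event, pending);
		int rctx;

		/*
		 * Take a slot in the swevent recursion accounting so a SW
		 * event raised from the work below cannot queue another
		 * irq_work against this event and recurse ad infinitum.
		 */
		rctx = perf_swevent_get_recursion_context();
		if (rctx == -1)
			return;	/* already inside this context level, bail */

		if (event->pending_disable) {
			event->pending_disable = 0;
			__perf_event_disable(event);
		}

		if (event->pending_wakeup) {
			event->pending_wakeup = 0;
			perf_event_wakeup(event);
		}

		perf_swevent_put_recursion_context(rctx);
	}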

also, this recursion accounting is bound to swevent_htable; should we
rather add separate recursion-context data for irq_work, to limit the
clashing with SW events? (a hypothetical sketch of what that could look
like is below)
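
something like this, purely as illustration (none of these symbols
exist in the kernel, the names are invented; a dedicated per-CPU guard
instead of reusing swevent_htable's counters):

	/* hypothetical: dedicated recursion guard for perf_pending_event() */
	static DEFINE_PER_CPU(int, perf_pending_recursion);

	static void perf_pending_event(struct irq_work *entry)
	{
		struct perf_event *event = container_of(entry,
				struct perf_event, pending);

		/*
		 * irq_work runs with IRQs disabled, so a plain per-cpu
		 * counter is enough; != 1 means we re-entered on this CPU
		 * and should skip the work rather than recurse.
		 */
		if (this_cpu_inc_return(perf_pending_recursion) != 1)
			goto out;

		if (event->pending_disable) {
			event->pending_disable = 0;
			__perf_event_disable(event);
		}

		if (event->pending_wakeup) {
			event->pending_wakeup = 0;
			perf_event_wakeup(event);
		}
	out:
		this_cpu_dec(perf_pending_recursion);
	}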

jirka

>  	if (event->pending_disable) {
>  		event->pending_disable = 0;
>  		__perf_event_disable(event);
> @@ -4422,6 +4424,8 @@ static void perf_pending_event(struct ir
>  		event->pending_wakeup = 0;
>  		perf_event_wakeup(event);
>  	}
> +
> +	perf_swevent_put_recursion_context(rctx);
>  }
>  
>  /*