Message-ID: <20140313195816.GJ21124@linux.vnet.ibm.com>
Date:	Thu, 13 Mar 2014 12:58:16 -0700
From:	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To:	Alexander Shishkin <alexander.shishkin@...ux.intel.com>
Cc:	Peter Zijlstra <a.p.zijlstra@...llo.nl>,
	Ingo Molnar <mingo@...hat.com>, linux-kernel@...r.kernel.org,
	Frederic Weisbecker <fweisbec@...il.com>,
	Mike Galbraith <efault@....de>,
	Paul Mackerras <paulus@...ba.org>,
	Stephane Eranian <eranian@...gle.com>,
	Andi Kleen <ak@...ux.intel.com>
Subject: Re: [PATCH] [RFC] perf: Fix a race between ring_buffer_detach() and
 ring_buffer_wakeup()

On Fri, Mar 07, 2014 at 03:38:46PM +0200, Alexander Shishkin wrote:
> This is more of a problem description than an actual bugfix, but currently
> ring_buffer_detach() can kick in while ring_buffer_wakeup() is traversing
> the ring buffer's event list, leading to CPU stalls.
> 
> What this patch does is crude, but it fixes the problem, which is: one RCU
> grace period has to elapse between ring_buffer_detach() and a subsequent
> ring_buffer_attach(); otherwise either the attach will fail or the wakeup
> will misbehave. Also, making it a call_rcu() callback would make it race
> with attach().
> 
> Another solution that I see is to check for list_empty(&event->rb_entry)
> before wake_up_all() in ring_buffer_wakeup() and restart the list
> traversal if it is indeed empty (see the sketch after this quote), but
> that is ugly too, as there will be extra wakeups on some events.
> 
> Anything that I'm missing here? Any better ideas?
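
A minimal sketch of the restart-on-empty alternative described above.  This
is illustrative only, not the actual kernel source: "iter" is an added
cursor variable, and the function shape only loosely follows the 3.14-era
ring_buffer_wakeup().  It assumes the unpatched detach path still uses
list_del_init(), which makes a detached entry point at itself, so a
concurrent RCU traversal that has stepped onto it would otherwise spin
there forever (the stall described above):

static void ring_buffer_wakeup(struct perf_event *event)
{
	struct perf_event *iter;
	struct ring_buffer *rb;

	rcu_read_lock();
again:
	rb = rcu_dereference(event->rb);
	if (rb) {
		list_for_each_entry_rcu(iter, &rb->event_list, rb_entry) {
			/*
			 * list_del_init() makes a detached entry point at
			 * itself, so the traversal would spin here.  The
			 * entry is no longer reachable from the list head,
			 * so restarting from the head terminates.
			 */
			if (list_empty(&iter->rb_entry))
				goto again;
			/* A restart may wake some events a second time. */
			wake_up_all(&iter->waitq);
		}
	}
	rcu_read_unlock();
}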

Not sure it qualifies as "better", but the caller of ring_buffer_detach()
is going to free the event anyway, so the synchronize_rcu() and the
INIT_LIST_HEAD() should not be needed in that case.  I am guessing that
the same is true for perf_mmap_close().

So that leaves the call in perf_event_set_output(), which detaches from an
old rb before attaching that same event to a new one.  So maybe have the
synchronize_rcu() and INIT_LIST_HEAD() instead be in the "if (old_rb)",
which might be a reasonably uncommon case?
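
A sketch of where that placement would land, assuming the 3.14-era shape
of perf_event_set_output(); the surrounding code is paraphrased rather
than quoted, so treat the variable names (old_rb, rb) and ordering as
illustrative:

	/* In ring_buffer_detach(): unlink only, no grace period. */
	spin_lock_irqsave(&rb->event_lock, flags);
	list_del_rcu(&event->rb_entry);
	wake_up_all(&event->waitq);
	spin_unlock_irqrestore(&rb->event_lock, flags);

	/* In perf_event_set_output(), on the detach-then-reattach path: */
	if (old_rb) {
		ring_buffer_detach(event, old_rb);
		/*
		 * Wait for concurrent ring_buffer_wakeup() traversals of
		 * old_rb->event_list to finish before reinitializing the
		 * entry and splicing the event onto the new list.
		 */
		synchronize_rcu();
		INIT_LIST_HEAD(&event->rb_entry);
	}
	if (rb)
		ring_buffer_attach(event, rb);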

							Thanx, Paul

> Signed-off-by: Alexander Shishkin <alexander.shishkin@...ux.intel.com>
> Cc: Paul McKenney <paulmck@...ux.vnet.ibm.com>
> ---
>  kernel/events/core.c | 6 ++++--
>  1 file changed, 4 insertions(+), 2 deletions(-)
> 
> diff --git a/kernel/events/core.c b/kernel/events/core.c
> index 661951a..bce41e0 100644
> --- a/kernel/events/core.c
> +++ b/kernel/events/core.c
> @@ -3861,7 +3861,7 @@ static void ring_buffer_attach(struct perf_event *event,
> 
>  	spin_lock_irqsave(&rb->event_lock, flags);
>  	if (list_empty(&event->rb_entry))
> -		list_add(&event->rb_entry, &rb->event_list);
> +		list_add_rcu(&event->rb_entry, &rb->event_list);
>  	spin_unlock_irqrestore(&rb->event_lock, flags);
>  }
> 
> @@ -3873,9 +3873,11 @@ static void ring_buffer_detach(struct perf_event *event, struct ring_buffer *rb)
>  		return;
> 
>  	spin_lock_irqsave(&rb->event_lock, flags);
> -	list_del_init(&event->rb_entry);
> +	list_del_rcu(&event->rb_entry);
>  	wake_up_all(&event->waitq);
>  	spin_unlock_irqrestore(&rb->event_lock, flags);
> +	synchronize_rcu();
> +	INIT_LIST_HEAD(&event->rb_entry);
>  }
> 
>  static void ring_buffer_wakeup(struct perf_event *event)
> -- 
> 1.9.0
> 
