Message-ID: <20241011104848.7f5b180b@gandalf.local.home>
Date: Fri, 11 Oct 2024 10:48:47 -0400
From: Steven Rostedt <rostedt@...dmis.org>
To: Petr Pavlu <petr.pavlu@...e.com>
Cc: LKML <linux-kernel@...r.kernel.org>, Linux Trace Kernel
 <linux-trace-kernel@...r.kernel.org>, Masami Hiramatsu
 <mhiramat@...nel.org>, Mathieu Desnoyers <mathieu.desnoyers@...icios.com>
Subject: Re: [PATCH] ring-buffer: Have the buffer update counter be atomic

On Fri, 11 Oct 2024 10:01:32 -0400
Steven Rostedt <rostedt@...dmis.org> wrote:

> > Sorry for not replying to your last comment on my patch, I was ill.
> > 
> > The member ring_buffer_per_cpu.cnt is intended to be accessed under the
> > reader_lock, same as the pages pointer which it is tied to, so this
> > change shouldn't be strictly needed.
> >   
> 
> Right, but there was one location where the cnt was updated outside the
> lock, the one I commented on. But instead of adding a lock around it, I
> decided to just make it an atomic. Then there would be no need for the
> lock. Hmm, it still needs a memory barrier though. At least an
> smp_mb__after_atomic().
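
To be concrete, the atomic variant would have looked something like the
below. Just a sketch, assuming cnt were converted to atomic_t; not a
tested patch:

	/* bump the modification counter (cnt assumed converted to atomic_t) */
	atomic_inc(&cpu_buffer->cnt);
	/* order the increment against later accesses for readers of cnt */
	smp_mb__after_atomic();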

Hmm, I don't like how the update in ring_buffer_subbuf_order_set() is
unprotected. I think this is the proper patch:

diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
index 696d422d5b35..0672df07b599 100644
--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -6774,6 +6774,7 @@ int ring_buffer_subbuf_order_set(struct trace_buffer *buffer, int order)
 	}
 
 	for_each_buffer_cpu(buffer, cpu) {
+		unsigned long flags;
 
 		if (!cpumask_test_cpu(cpu, buffer->cpumask))
 			continue;
@@ -6800,11 +6801,15 @@ int ring_buffer_subbuf_order_set(struct trace_buffer *buffer, int order)
 						     struct buffer_page, list);
 		list_del_init(&cpu_buffer->reader_page->list);
 
+		/* Synchronize with rb_check_pages() */
+		raw_spin_lock_irqsave(&cpu_buffer->reader_lock, flags);
+
 		/* The cpu_buffer pages are a link list with no head */
 		cpu_buffer->pages = cpu_buffer->new_pages.next;
 		cpu_buffer->new_pages.next->prev = cpu_buffer->new_pages.prev;
 		cpu_buffer->new_pages.prev->next = cpu_buffer->new_pages.next;
 		cpu_buffer->cnt++;
+		raw_spin_unlock_irqrestore(&cpu_buffer->reader_lock, flags);
 
 		/* Clear the new_pages list */
 		INIT_LIST_HEAD(&cpu_buffer->new_pages);

Even though the update is also protected by the buffer->mutex, I feel
more comfortable taking the reader_lock explicitly here.
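
For context, the reader side this pairs with is rb_check_pages(), which
samples cnt under the reader_lock and compares it after re-taking the
lock, to detect a concurrent swap of the pages list. Roughly like this
(a sketch of the pattern, not the verbatim kernel code):

	unsigned long flags;
	unsigned int cnt;

	raw_spin_lock_irqsave(&cpu_buffer->reader_lock, flags);
	cnt = cpu_buffer->cnt;	/* snapshot the modification counter */
	raw_spin_unlock_irqrestore(&cpu_buffer->reader_lock, flags);

	/* ... reader_lock dropped while the pages are being checked ... */

	raw_spin_lock_irqsave(&cpu_buffer->reader_lock, flags);
	if (cnt != cpu_buffer->cnt) {
		/* pages list was modified concurrently, bail out */
	}
	raw_spin_unlock_irqrestore(&cpu_buffer->reader_lock, flags);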

-- Steve
