Message-ID: <20240112100641.3675edad@gandalf.local.home>
Date: Fri, 12 Jan 2024 10:06:41 -0500
From: Steven Rostedt <rostedt@...dmis.org>
To: Vincent Donnefort <vdonnefort@...gle.com>
Cc: Mathieu Desnoyers <mathieu.desnoyers@...icios.com>, mhiramat@...nel.org,
 linux-kernel@...r.kernel.org, linux-trace-kernel@...r.kernel.org,
 kernel-team@...roid.com
Subject: Re: [PATCH v11 2/5] ring-buffer: Introducing ring-buffer mapping
 functions

On Fri, 12 Jan 2024 09:13:02 +0000
Vincent Donnefort <vdonnefort@...gle.com> wrote:

> > > +
> > > +	unsigned long	subbufs_touched;
> > > +	unsigned long	subbufs_lost;
> > > +	unsigned long	subbufs_read;  
> > 
> > Now I'm thinking we may not want this exported, as I'm not sure it's useful.  
> 
> touched and lost are not useful now, but they will be for my hypervisor
> tracing support, which is why I added them already.
> 
> subbufs_read could probably go away, though, as even in that case I can
> track it in the reader.

Yeah, but I think we can leave it off for now.

This is something we could extend the structure with. In fact, it can be
different for different buffers!

That is, since they are pretty meaningless for the normal ring buffer, I
don't think we need to export them. But if they are useful for your
hypervisor buffer, it can export them. That would also be a good test of
whether the extension mechanism works. Of course, it means that if the
normal ring buffer later needs more info, it must include these fields as
well, since they will already be part of the extended structure.
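To illustrate the extension idea being discussed, here is a minimal
userspace sketch (not from the thread or the actual kernel ABI; the
size field and struct names are assumptions): the base meta structure
advertises its size, an extended variant appends fields after it, and a
reader detects the extension by comparing the advertised size against
the base size.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical sketch: the base meta page exports only what every ring
 * buffer needs; an extended variant appends extra fields at the end.
 * Field and struct names here are illustrative, not the kernel's. */
struct trace_buffer_meta {
	uint32_t	meta_page_size;	/* lets readers detect extensions */
	uint64_t	entries;
	uint64_t	overrun;
	uint64_t	read;
};

struct hyp_buffer_meta {
	struct trace_buffer_meta base;	/* base part must come first */
	uint64_t	subbufs_touched;	/* extension-only fields */
	uint64_t	subbufs_lost;
};

/* A reader checks the advertised size to know whether the extension
 * fields are present before touching them. */
static int has_hyp_extension(const struct trace_buffer_meta *m)
{
	return m->meta_page_size >= sizeof(struct hyp_buffer_meta);
}
```

An older reader that only knows the base structure keeps working, since
the fields it understands stay at the same offsets.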


> 
> > 
> > Vincent, talking with Mathieu, he was suggesting that we only export what
> > we really need, and I don't think we need the above. Especially since they
> > are not exposed in the stats file.
> > 
> >   
> > > +
> > > +	struct {
> > > +		unsigned long	lost_events;
> > > +		__u32		id;
> > > +		__u32		read;
> > > +	} reader;  
> > 
> > The above is definitely needed, as all of it is used to read the
> > reader-page of the sub-buffer.
> > 
> > lost_events is the number of lost events that happened before this
> > sub-buffer was swapped out.
> > 
> > Hmm, Vincent, I just notice that you update the lost_events as:
> >   
> > > +static void rb_update_meta_page(struct ring_buffer_per_cpu *cpu_buffer)
> > > +{
> > > +	struct trace_buffer_meta *meta = cpu_buffer->meta_page;
> > > +
> > > +	WRITE_ONCE(meta->entries, local_read(&cpu_buffer->entries));
> > > +	WRITE_ONCE(meta->overrun, local_read(&cpu_buffer->overrun));
> > > +	WRITE_ONCE(meta->read, cpu_buffer->read);
> > > +
> > > +	WRITE_ONCE(meta->subbufs_touched, local_read(&cpu_buffer->pages_touched));
> > > +	WRITE_ONCE(meta->subbufs_lost, local_read(&cpu_buffer->pages_lost));
> > > +	WRITE_ONCE(meta->subbufs_read, local_read(&cpu_buffer->pages_read));
> > > +
> > > +	WRITE_ONCE(meta->reader.read, cpu_buffer->reader_page->read);
> > > +	WRITE_ONCE(meta->reader.id, cpu_buffer->reader_page->id);
> > > +	WRITE_ONCE(meta->reader.lost_events, cpu_buffer->lost_events);
> > > +}  
> > 
> > The lost_events may need to be handled differently, as it doesn't always
> > get cleared. So it may be stale data.  
> 
> My idea was to check this value after the ioctl(). If > 0, then events were
> lost between that ioctl() and the previous swap.
> 
> But now, with "[PATCH] ring-buffer: Have mmapped ring buffer keep track of
> missed events" putting this information in the page itself, we can get rid
> of this field.
> 

I'm thinking both may be good, as the number of dropped events is not
added if there's no room for it at the end of the ring buffer. When
there's no room, it just sets a flag that there were missed events but
doesn't say how many were missed.

If anything, it should be in both locations: in the sub-buffer header, to
be consistent with the read/splice version, and in the meta page, where it
can still be accessed if there's no room to add the count to the header.
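The fallback described above can be sketched roughly like this (a
userspace sketch, not the actual kernel code; the flag bits, struct
layout, and names are all assumptions): the reader prefers the count in
the sub-buffer header when it was stored, and falls back to the meta
page when only the flag fit.

```c
#include <stdint.h>

/* Assumed flag bits in the sub-buffer header's commit word. */
#define RB_MISSED_EVENTS	(1UL << 31)	/* events were missed */
#define RB_MISSED_STORED	(1UL << 30)	/* count stored in header */

struct subbuf_header {
	uint64_t	commit;		/* flags in the high bits (assumed) */
	uint64_t	missed_events;	/* valid only if RB_MISSED_STORED */
};

struct meta_page {
	uint64_t	reader_lost_events;	/* updated on reader swap */
};

/* Reader-side helper: use the in-header count when it fit, otherwise
 * fall back to the count kept in the meta page. */
static uint64_t missed_events(const struct subbuf_header *h,
			      const struct meta_page *m)
{
	if (!(h->commit & RB_MISSED_EVENTS))
		return 0;			/* nothing was missed */
	if (h->commit & RB_MISSED_STORED)
		return h->missed_events;	/* count fit in the header */
	return m->reader_lost_events;		/* flag only: ask meta page */
}
```

This keeps the read/splice path and the mmap path consistent while still
covering the case where the header had no room for the count.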

-- Steve
