Date:	Mon, 28 Apr 2008 09:13:48 -0400 (EDT)
From:	Steven Rostedt <rostedt@...dmis.org>
To:	Pekka Paalanen <pq@....fi>
cc:	linux-kernel@...r.kernel.org, Ingo Molnar <mingo@...e.hu>,
	akpm@...l.org, Peter Zijlstra <peterz@...radead.org>,
	Soeren Sandmann Pedersen <sandmann@...hat.com>,
	Steven Rostedt <srostedt@...hat.com>
Subject: Re: [PATCH 1/3] ftrace: add logic to record overruns


On Sat, 26 Apr 2008, Pekka Paalanen wrote:

> On Mon, 21 Apr 2008 17:09:36 -0400
> Steven Rostedt <rostedt@...dmis.org> wrote:
>
> ....
> > @@ -2510,6 +2511,11 @@ tracing_read_pipe(struct file *filp, cha
> >  	for_each_cpu_mask(cpu, mask) {
> >  		data = iter->tr->data[cpu];
> >  		__raw_spin_lock(&data->lock);
> > +
> > +		if (data->overrun > iter->last_overrun[cpu])
> > +			iter->overrun[cpu] +=
> > +				data->overrun - iter->last_overrun[cpu];
> > +		iter->last_overrun[cpu] = data->overrun;
>
> Option 1: move this code earlier.
> (Could also remove the `if' and make it unconditional.)

I'll move it. The condition was there because I could have sworn there's
some way (perhaps just via an error path) that last_overrun could end up
greater than the overrun count. But perhaps it doesn't matter.

> > +++ linux-sched-devel.git/kernel/trace/trace.h	2008-04-21 14:38:09.000000000 -0400
> > @@ -102,6 +102,7 @@ struct trace_array_cpu {
> >  	void			*trace_head; /* producer */
> >  	void			*trace_tail; /* consumer */
> >  	unsigned long		trace_idx;
> > +	unsigned long		overrun;
>
> Option 2: Change this into atomic_t.

I could do that too.

> >   * results to users and which routines might sleep, etc:
> >   */
> >  struct trace_iterator {
> > -	struct trace_seq	seq;
> >  	struct trace_array	*tr;
> >  	struct tracer		*trace;
> > +	long			last_overrun[NR_CPUS];
> > +	long			overrun[NR_CPUS];
>
> The problem with these fields is that they are updated only after the read
> hook has been called in tracing_read_pipe(), which means all "N events
> lost!" entries will be one read call behind. This can be any number of
> events.
>
> If trace_array_cpu::overrun was atomic_t, I could process that in
> the read callback. But, trace_iterator::overrun would be good, because
> I could just reset it to zero every time I encounter a non-zero value.
>
> I can think of two options, the ones noted above: moving the update
> earlier, or using atomic_t. Or maybe do both, so that we don't have to
> lock the trace_array_cpu structs early and also avoid duplicating the
> overrun/last_overrun logic in a tracer.
>
> What shall we do?

I think you're right that we should move it before calling the read hook.
I didn't want to make the one value atomic because that would just add
another atomic operation in a fast path. I may wait on making that atomic.

-- Steve

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
