Message-ID: <20190315170822.6316f3cb@gandalf.local.home>
Date:   Fri, 15 Mar 2019 17:08:22 -0400
From:   Steven Rostedt <rostedt@...dmis.org>
To:     Doug Anderson <dianders@...omium.org>
Cc:     Ingo Molnar <mingo@...hat.com>,
        Jason Wessel <jason.wessel@...driver.com>,
        Daniel Thompson <daniel.thompson@...aro.org>,
        kgdb-bugreport@...ts.sourceforge.net,
        Brian Norris <briannorris@...omium.org>,
        LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v3] tracing: kdb: Allow ftdump to skip all but the last
 few lines

On Fri, 15 Mar 2019 13:54:11 -0700
Doug Anderson <dianders@...omium.org> wrote:

> > Hmm, actually your method still won't work, because you are only
> > counting entries, not lines. The stack dumps are considered a single
> > line but will print multiple lines.
> 
> LOL.  Back to v1 then?  v1 of the patch [1] was definitely consistent
> even if it was very slow.  Specifically whatever was being counted by
> ftrace_dump_buf() (entries or lines or something in between) was
> definitely used to figure out how many to skip.

You'll need to read the line itself. I don't see v1 giving a different
count than the get_total_entries() does.

> 
> 
> > Not only that, perhaps you should break apart ftrace_dump_buf(),
> > because calling it twice (or doing what I suggested) won't stop tracing
> > in between, and the size of the buffer might change between the two
> > calls.
> >
> > You need to move out:
> >
> >         for_each_tracing_cpu(cpu) {
> >                 atomic_inc(&per_cpu_ptr(iter.trace_buffer->data, cpu)->disabled);
> >         }
> >
> >
> > and
> >
> >         for_each_tracing_cpu(cpu) {
> >                 atomic_dec(&per_cpu_ptr(iter.trace_buffer->data, cpu)->disabled);
> >         }
> >
> > to disable tracing while you do this.  
> 
> Happy to do this in a v4.  I think it's very unlikely to matter
> because we're in kdb and thus all the other CPUs are stopped and
> interrupts are disabled.  ...so unless an NMI adds something to the
> trace buffer in between the two calls the counts will be the same.
> ...but it certainly is cleaner.

NMIs can indeed add to the trace.

> 
> 
> > The get_total_entries() is the faster approach to get the count, but in
> > either case, the count should end up the same.  
> 
> If you're OK with going back to the super slow mechanism in v1 I can
> do that and we can be guaranteed we're consistent.  Presumably it
> can't be _that_ slow because we're going to use the same mechanism to
> skip the lines later.
> 
> So, if you agree, I'll send out a v4 that looks like v1 except that it
> disables / enables tracing directly in kdb_ftdump() so it stays
> disabled for both calls.
> 
> 
> [1] https://lkml.kernel.org/r/20190305233150.159633-1-dianders@chromium.org
> 

But this part of the patch:

> -static void ftrace_dump_buf(int skip_lines, long cpu_file)
> +static int ftrace_dump_buf(int skip_lines, long cpu_file, bool quiet)
>  {
>  	/* use static because iter can be a bit big for the stack */
>  	static struct trace_iterator iter;
> @@ -39,7 +39,9 @@ static void ftrace_dump_buf(int skip_lines, long cpu_file)
>  	/* don't look at user memory in panic mode */
>  	tr->trace_flags &= ~TRACE_ITER_SYM_USEROBJ;
>  
> -	kdb_printf("Dumping ftrace buffer:\n");
> +	if (!quiet)
> +		kdb_printf("Dumping ftrace buffer (skipping %d lines):\n",
> +			   skip_lines);
>  
>  	/* reset all but tr, trace, and overruns */
>  	memset(&iter.seq, 0,
> @@ -66,25 +68,29 @@ static void ftrace_dump_buf(int skip_lines, long cpu_file)
>  	}
>  
>  	while (trace_find_next_entry_inc(&iter)) {
> -		if (!cnt)
> -			kdb_printf("---------------------------------\n");
> -		cnt++;
> -
> -		if (!skip_lines) {
> -			print_trace_line(&iter);
> -			trace_printk_seq(&iter.seq);
> -		} else {
> -			skip_lines--;
> +		if (!quiet) {
> +			if (!cnt)
> +				kdb_printf("---------------------------------\n");
> +
> +			if (!skip_lines) {
> +				print_trace_line(&iter);
> +				trace_printk_seq(&iter.seq);
> +			} else {
> +				skip_lines--;

How do you know that trace_printk_seq() didn't produce more than one line?

If the event is a stack dump, you need to read the seq, and count the
number of '\n' that are added.

The cnt in this code is no different than the get_total_entries() that
I suggested.

Now the really ugly solution is to have:

	print_trace_line(&iter);

	if (quiet || skip_lines) {
		lines = count_all_newlines(&iter.seq);
		if (skip_lines) {
			skip_lines -= lines;
			if (skip_lines < 0)
				skip_lines = 0;
		}
		cnt += lines;
	} else {
		trace_printk_seq(&iter.seq);
	}

Where the count_all_newlines() needs to be created to read the seq
buffer and return the number of '\n' that are found.

-- Steve



> +			}
>  		}
> +		cnt++;
>  
>  		if (KDB_FLAG(CMD_INTERRUPT))
>  			goto out;
>  	}
>  
> -	if (!cnt)
> -		kdb_printf("   (ftrace buffer empty)\n");
> -	else
> -		kdb_printf("---------------------------------\n");
> +	if (!quiet) {
> +		if (!cnt)
> +			kdb_printf("   (ftrace buffer empty)\n");
> +		else
> +			kdb_printf("---------------------------------\n");
> +	}
