Message-ID: <CAD=FV=VcpVYjwYq3zo6ODKzRBtDkjgEM2sXC-KMUa0ObskfTnw@mail.gmail.com>
Date: Fri, 15 Mar 2019 13:54:11 -0700
From: Doug Anderson <dianders@...omium.org>
To: Steven Rostedt <rostedt@...dmis.org>
Cc: Ingo Molnar <mingo@...hat.com>,
Jason Wessel <jason.wessel@...driver.com>,
Daniel Thompson <daniel.thompson@...aro.org>,
kgdb-bugreport@...ts.sourceforge.net,
Brian Norris <briannorris@...omium.org>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v3] tracing: kdb: Allow ftdump to skip all but the last
few lines
Hi,
On Fri, Mar 15, 2019 at 1:12 PM Steven Rostedt <rostedt@...dmis.org> wrote:
>
> On Fri, 15 Mar 2019 12:55:31 -0700
> Doug Anderson <dianders@...omium.org> wrote:
>
> > >
> > > Actually, you can add a function in trace.c that exposes
> > > get_total_entries:
> > >
> > > unsigned long trace_total_entries(struct trace_array *tr)
> > > {
> > > 	unsigned long total, entries;
> > >
> > > 	if (!tr)
> > > 		tr = &global_trace;
> > >
> > > 	get_total_entries(tr->trace_buffer, &total, &entries);
> > >
> > > 	return entries;
> > > }
> > >
> > > and then do:
> > > cnt = trace_total_entries(NULL);
> >
> > OK. I guess I'll need to figure out how to add an argument to limit
> > it to just one CPU too, since the kdb command allows you to specify a
> > single CPU or all CPUs. I think the best way would be adding an
> > argument to get_total_entries(). ...or I could just change the kdb
> > command to not allow specifying a CPU? Which do you prefer? I
> > personally have never used the feature to print the ftrace buffer for
> > just one CPU, so I'd be OK with removing it.
>
> I would make a get_total_entries_cpu() that contains the guts of
> get_total_entries(), and then have get_total_entries() call it:
>
> static void
> get_total_entries(struct trace_buffer *buf,
> 		  unsigned long *total, unsigned long *entries)
> {
> 	unsigned long t, e;
> 	int cpu;
>
> 	*total = 0;
> 	*entries = 0;
>
> 	for_each_tracing_cpu(cpu) {
> 		get_total_entries_cpu(buf, &t, &e, cpu);
> 		*total += t;
> 		*entries += e;
> 	}
> }
>
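Sounds good. For the guts, I'm picturing something like the below,
just pulling the body of today's per-CPU loop in trace.c out into the
new helper (only a sketch, so treat the details as my assumption until
I've actually compiled it):

static void
get_total_entries_cpu(struct trace_buffer *buf, unsigned long *total,
		      unsigned long *entries, int cpu)
{
	unsigned long count;

	count = ring_buffer_entries_cpu(buf->buffer, cpu);
	/*
	 * If this buffer has skipped entries, then we hold all
	 * entries for the trace and we need to ignore the
	 * ones before the time stamp.
	 */
	if (per_cpu_ptr(buf->data, cpu)->skipped_entries) {
		count -= per_cpu_ptr(buf->data, cpu)->skipped_entries;
		/* total is the same as the entries */
		*total = count;
	} else {
		*total = count +
			ring_buffer_overrun_cpu(buf->buffer, cpu);
	}
	*entries = count;
}

Then trace_total_entries() could also grow a trace_total_entries_cpu()
sibling that calls this for a single CPU instead of looping.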
> >
> >
> > > Don't modify ftrace_dump_buf()
> >
> > I still kinda prefer modifying ftrace_dump_buf() just because it's
> > pretty important that the "count" we come up with exactly matches
> > the count that ftrace_dump_buf() will come up with. My worry probably
> > comes from my lack of experience with the internals of ftrace but, for
> > instance, I see things like "tr->trace_flags &=
> > ~TRACE_ITER_SYM_USEROBJ" in ftrace_dump_buf() and I worry that it will
> > affect the count. ...but if you tell me that I need not worry about
> > things like that then I won't. :-)
>
> Hmm, actually your method still won't work, because you are only
> counting entries, not lines. The stack dumps are considered a single
> entry but will print multiple lines.
LOL. Back to v1 then? v1 of the patch [1] was definitely consistent,
even if it was very slow. Specifically, whatever was being counted by
ftrace_dump_buf() (entries, lines, or something in between) was
exactly what got used to figure out how many to skip.
> Not only that, perhaps you should break apart ftrace_dump_buf(),
> because calling it twice (or doing what I suggested) won't stop tracing
> in between, and the size of the buffer might change between the two
> calls.
>
> You need to move out:
>
> for_each_tracing_cpu(cpu) {
> 	atomic_inc(&per_cpu_ptr(iter.trace_buffer->data, cpu)->disabled);
> }
>
>
> and
>
> for_each_tracing_cpu(cpu) {
> 	atomic_dec(&per_cpu_ptr(iter.trace_buffer->data, cpu)->disabled);
> }
>
> to disable tracing while you do this.
Happy to do this in a v4. I think it's very unlikely to matter,
because we're in kdb and thus all the other CPUs are stopped and
interrupts are disabled. ...so unless an NMI adds something to the
trace buffer in between the two calls, the counts will be the same.
...but it certainly is cleaner.
> get_total_entries() is the faster approach to get the count, but in
> either case the count should end up the same.
If you're OK with going back to the super slow mechanism in v1, I can
do that and we can be guaranteed we're consistent. Presumably it
can't be _that_ slow, since we're going to use the same mechanism to
skip the lines later anyway.
So, if you agree, I'll send out a v4 that looks like v1 except that it
disables/enables tracing directly in kdb_ftdump() so tracing stays
disabled across both calls.
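In sketch form, something like the below (the count-only mode of
ftrace_dump_buf() is the hypothetical part here, since that's
effectively what v1 added; the disable/enable loops are yours from
above):

	for_each_tracing_cpu(cpu) {
		atomic_inc(&per_cpu_ptr(iter.trace_buffer->data, cpu)->disabled);
	}

	/* Pass 1: count the lines ftrace_dump_buf() would print */
	cnt = ftrace_dump_buf(0, cpu_file, /* count_only */ true);

	/* Pass 2: really print, skipping all but the last skip_lines */
	ftrace_dump_buf(max(cnt - skip_lines, 0), cpu_file, false);

	for_each_tracing_cpu(cpu) {
		atomic_dec(&per_cpu_ptr(iter.trace_buffer->data, cpu)->disabled);
	}

Since nothing can be added to the buffers between the two passes, the
count from pass 1 matches what pass 2 walks.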
[1] https://lkml.kernel.org/r/20190305233150.159633-1-dianders@chromium.org
-Doug