Date:   Tue, 19 Mar 2019 07:45:38 -0700
From:   Doug Anderson <dianders@...omium.org>
To:     Daniel Thompson <daniel.thompson@...aro.org>
Cc:     Steven Rostedt <rostedt@...dmis.org>,
        Ingo Molnar <mingo@...hat.com>,
        Jason Wessel <jason.wessel@...driver.com>,
        kgdb-bugreport@...ts.sourceforge.net,
        Brian Norris <briannorris@...omium.org>,
        LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v5 2/3] tracing: Add trace_total_entries() / trace_total_entries_cpu()

Hi,

On Tue, Mar 19, 2019 at 4:25 AM Daniel Thompson
<daniel.thompson@...aro.org> wrote:
>
> On Mon, Mar 18, 2019 at 01:47:40PM -0700, Douglas Anderson wrote:
> > These two new exported functions will be used in a future patch by
> > kdb_ftdump() to quickly skip all but the last few trace entries.
> >
> > Suggested-by: Steven Rostedt <rostedt@...dmis.org>
> > Signed-off-by: Douglas Anderson <dianders@...omium.org>
> > ---
> >
> > Changes in v5: None
> > Changes in v4:
> > - trace_total_entries() / trace_total_entries_cpu() new for v4
> >
> > Changes in v3: None
> >
> >  kernel/trace/trace.c | 65 ++++++++++++++++++++++++++++++++++----------
> >  kernel/trace/trace.h |  3 ++
> >  2 files changed, 53 insertions(+), 15 deletions(-)
> >
> > diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
> > index ccd759eaad79..7afc90f82e53 100644
> > --- a/kernel/trace/trace.c
> > +++ b/kernel/trace/trace.c
> > @@ -3490,34 +3490,69 @@ static void s_stop(struct seq_file *m, void *p)
> >       trace_event_read_unlock();
> >  }
> >
> > +static void
> > +get_total_entries_cpu(struct trace_buffer *buf, unsigned long *total,
> > +                   unsigned long *entries, int cpu)
> > +{
> > +     unsigned long count;
> > +
> > +     count = ring_buffer_entries_cpu(buf->buffer, cpu);
> > +     /*
> > +      * If this buffer has skipped entries, then we hold all
> > +      * entries for the trace and we need to ignore the
> > +      * ones before the time stamp.
> > +      */
> > +     if (per_cpu_ptr(buf->data, cpu)->skipped_entries) {
> > +             count -= per_cpu_ptr(buf->data, cpu)->skipped_entries;
> > +             /* total is the same as the entries */
> > +             *total = count;
> > +     } else
> > +             *total = count +
> > +                     ring_buffer_overrun_cpu(buf->buffer, cpu);
> > +     *entries = count;
> > +}
> > +
> >  static void
> >  get_total_entries(struct trace_buffer *buf,
> >                 unsigned long *total, unsigned long *entries)
> >  {
> > -     unsigned long count;
> > +     unsigned long t, e;
> >       int cpu;
> >
> >       *total = 0;
> >       *entries = 0;
> >
> >       for_each_tracing_cpu(cpu) {
> > -             count = ring_buffer_entries_cpu(buf->buffer, cpu);
> > -             /*
> > -              * If this buffer has skipped entries, then we hold all
> > -              * entries for the trace and we need to ignore the
> > -              * ones before the time stamp.
> > -              */
> > -             if (per_cpu_ptr(buf->data, cpu)->skipped_entries) {
> > -                     count -= per_cpu_ptr(buf->data, cpu)->skipped_entries;
> > -                     /* total is the same as the entries */
> > -                     *total += count;
> > -             } else
> > -                     *total += count +
> > -                             ring_buffer_overrun_cpu(buf->buffer, cpu);
> > -             *entries += count;
> > +             get_total_entries_cpu(buf, &t, &e, cpu);
> > +             *total += t;
> > +             *entries += e;
> >       }
> >  }
> >
> > +unsigned long trace_total_entries_cpu(struct trace_array *tr, int cpu)
> > +{
> > +     unsigned long total, entries;
> > +
> > +     if (!tr)
> > +             tr = &global_trace;
>
> This function is only ever called with tr set to NULL, which means tr
> is an argument looking for a user.
>
> I wouldn't mind if this were copying prior art to keep the API feel
> the same, but I can't find any other trace function where NULL can be
> substituted for the trace_array. AFAICT all the existing call sites
> that use global_trace use it unconditionally.

Happy to change this if you guys want.  At the moment
trace_total_entries() comes straight from Steven's suggestion at:

https://lkml.kernel.org/r/20190315144130.1aa36931@gandalf.local.home
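
For context, the intent is that the kdb side uses the returned count
to figure out how many entries to skip before dumping.  Very roughly,
and with illustrative names only (this is a sketch, not the actual
patch 3/3 code):

static void kdb_ftdump_sketch(int count, int cpu)
{
	unsigned long entries, skip = 0;

	/* NULL means "use global_trace" with the patch as posted. */
	if (cpu < 0)
		entries = trace_total_entries(NULL);
	else
		entries = trace_total_entries_cpu(NULL, cpu);

	/* Keep only the last "count" entries. */
	if (count >= 0 && entries > (unsigned long)count)
		skip = entries - count;

	/* ... iterate the trace, discarding the first "skip" entries ... */
}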

Ugh, but while looking at this I realized I totally forgot one piece
of feedback from Steven: to move the disabling of tracing out of these
functions so it isn't re-enabled after counting but before dumping.
Not sure how I missed that.  :(
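
To make that fix concrete, the shape I have in mind for v6 is roughly
the following -- a sketch only, assuming tracing_off()/tracing_on() is
the right knob here and with kdb_dump_entries() as a made-up name for
the dump step:

	/*
	 * Disable tracing once, up front, and keep it disabled across
	 * both the counting and the dumping, so new entries can't
	 * invalidate the count in between.  (A real version would save
	 * and restore the previous tracing state.)
	 */
	tracing_off();

	entries = trace_total_entries(NULL);
	/* ... compute "skip" from "entries" as in the sketch above ... */
	kdb_dump_entries(skip);

	tracing_on();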

I'll wait to spin until I hear from Steven whether he wants me to
remove the "struct trace_array *tr" argument, and then I'll post a v6
with the fix I forgot...
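
(If the argument does go away, I'd expect the exports to collapse to
something like:

unsigned long trace_total_entries(void);
unsigned long trace_total_entries_cpu(int cpu);

with global_trace used directly inside, matching the other call sites
Daniel pointed at.)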


-Doug
