Message-ID: <CAEr6+EAn3NZ+1P+yD6HHMmLHBtASOdVXHd64hY2xQyrLNddb-Q@mail.gmail.com>
Date:   Mon, 10 Jan 2022 10:00:37 +0800
From:   Jeff Xie <xiehuan09@...il.com>
To:     Steven Rostedt <rostedt@...dmis.org>
Cc:     Masami Hiramatsu <mhiramat@...nel.org>, mingo@...hat.com,
        Tom Zanussi <zanussi@...nel.org>, linux-kernel@...r.kernel.org
Subject: Re: [RFC][PATCH v6 1/5] trace: Add trace any kernel object

Hi Steven,

Welcome back, and I look forward to working on this patch set with you
again in 2022 ;-)

Thank you and Masami for your guidance on this patchset in 2021. I
learned a lot.


On Sat, Jan 8, 2022 at 8:21 AM Steven Rostedt <rostedt@...dmis.org> wrote:
>
> Sorry for the late reply, I'm currently unemployed (for another week) and
> was spending all my time renovating my office. I finished my office and I'm
> now trying to get back up to speed.
>
> On Sun, 19 Dec 2021 12:07:23 +0900
> Masami Hiramatsu <mhiramat@...nel.org> wrote:
>
>
> > > > > +#include "trace_output.h"
> > > > > +#include <linux/freelist.h>
> > > > > +
> > > > > +static DEFINE_PER_CPU(atomic_t, trace_object_event_disable);
> > > >
> > > > atomic_t is for atomic operation which must be shared among cpus. On the
> > > > other hand, per-cpu variable is used for the core-local storage or flags,
> > > > other cpus never touch it. Thus the per-cpu atomic_t is very strange.
> > > >
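[For reference, a minimal sketch of the two patterns being contrasted here; the variable names are illustrative, not from the patch. A per-cpu variable is private to each CPU, so a plain integer is enough, while atomic_t is meant for a single counter that several CPUs update concurrently:

	/* illustrative declarations, not from the patch */
	static DEFINE_PER_CPU(unsigned int, my_percpu_counter);  /* core-local: plain int */
	static atomic_t my_shared_counter = ATOMIC_INIT(0);      /* shared across CPUs: atomic_t */
]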
> > >
> > > In patch v1, I cloned it from function_test_events_call()
> > > in kernel/trace/trace_events.c
> > >
> > > commit: 9ea21c1ecdb35ecdcac5fd9d95f62a1f6a7ffec0
> > > tracing/events: perform function tracing in event selftests
> > > Author:     Steven Rostedt <srostedt@...hat.com>
> >
> > Hmm, OK.
>
> Ugh, you're showing me the skeletons in my closet! That commit is from 2009, when I
> didn't know any better ;-)
>
> >
> > >
> > > It should be there to prevent the increment from being preempted
> > > by interrupt context.
> >
> > Yeah, I think so.
> >
> > The commit message says "some bugs", but it is not clear what actually
> > needed to be taken care of.
> >
> >     tracing/events: perform function tracing in event selftests
> >
> >     We can find some bugs in the trace events if we stress the writes as well.
> >     The function tracer is a good way to stress the events.
> >
> > Steve, can you tell me what was the problem?
> >
> > I think we don't need a per-cpu atomic_t, because the counter is only
> > incremented and decremented. Thus, when exiting a nested ftrace handler on
> > the same CPU, the counter comes back to the same value. We don't need to
> > care about atomic increments.
> >
> > I mean, if we use a normal per-cpu "unsigned int" as the counter, the
> > operation "counter++" becomes:
>
> Yes, that was from the days of being extra paranoid. A simple counter would
> work, with a barrier() in place such that gcc doesn't cause any issues.
>
> I may have to go back and revisit all that code and clean it up a bit.
>
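[A minimal sketch of that simple-counter idea, assuming it runs in an ftrace callback where preemption is already disabled; the names are illustrative, not from the patch:

	static DEFINE_PER_CPU(unsigned int, trace_object_disable);

	static void trace_object_callback(void)
	{
		unsigned int *disable = this_cpu_ptr(&trace_object_disable);

		(*disable)++;
		if (*disable != 1)	/* nested on this CPU (e.g. via an interrupt) */
			goto out;

		barrier();		/* keep gcc from moving work across the check */
		/* ... record the event here ... */
		barrier();
	out:
		(*disable)--;
	}
]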
> >
> > load counter to reg1
> > [1]
> > reg1 = reg1 + 1
> > store reg1 to counter
> >
> > And if an interrupt occurs at [1], the following happens.
> >
> > load counter to reg1 # counter = 0
> >
> >   (interrupt - save reg1)
> >   load counter to reg1  # counter = 0
> >   reg1 = reg1 + 1
> >   store reg1 to counter  # counter = 1
> >   ...
> >   load counter to reg1  # counter = 1
> >   reg1 = reg1 - 1
> >   store reg1 to counter  # counter = 0
> >   (iret - restore reg1)
> >
> > reg1 = reg1 + 1
> > store reg1 to counter
> >
> > So even if the operation is not atomic, there seems to be no problem.
> > What other scenario do we have to worry about?
> >
> > (BTW, what is ftrace_test_recursion_trylock()? Is that also
> > for detecting the nesting case?)
>
> Yes, the ftrace_test_recursion_trylock() is for finding recursions.
>
> The above code is from the early days of ftrace, and was only used in
> testing at boot up. It's not something to copy from ;-)
>
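[For what it's worth, a hedged sketch of how the recursion guard is typically used in an ftrace callback; the callback body and names are illustrative:

	static void my_callback(unsigned long ip, unsigned long parent_ip,
				struct ftrace_ops *ops, struct ftrace_regs *fregs)
	{
		int bit;

		bit = ftrace_test_recursion_trylock(ip, parent_ip);
		if (bit < 0)		/* recursion detected: bail out */
			return;

		/* ... the real work, safe from re-entry in this context ... */

		ftrace_test_recursion_unlock(bit);
	}
]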
> >
> > > > > +static DEFINE_RAW_SPINLOCK(object_spin_lock);
> > > > > +static struct trace_event_file event_trace_file;
> > > > > +static const int max_args_num = 6;
> > > > > +static const int max_obj_pool = 10;
> > > > > +static atomic_t trace_object_ref;
> > > > > +static int exit_trace_object(void);
> > > > > +static int init_trace_object(void);
> > > > > +
> > > >
> > > > Please add more comments to the code itself. Explain why this is needed
> > > > and how it works, and for which cases. That will lead to a deeper understanding.
> > > >
> > >
> > > I agree, I will add more comments in the next version.
> > >
> > > > > +struct object_instance {
> > > > > +     void *object;
> > > > > +     struct freelist_node free_list;
> > > > > +     struct list_head active_list;
> > > > > +};
> > > > > +
> > > > > +struct obj_pool {
> > > > > +     struct freelist_head free_list;
> > > > > +     struct list_head active_list;
> > > > > +};
> > > > > +static struct obj_pool *obj_pool;
> > > > > +
> > > > > +static bool object_exist(void *obj)
> > > > > +{
> > > > > +     struct object_instance *inst;
> > > > > +     bool ret = false;
> > > > > +
> > > > > +     list_for_each_entry_rcu(inst, &obj_pool->active_list, active_list) {
> > > > > +             if (inst->object == obj) {
> > > > > +                     ret = true;
> > > > > +                     goto out;
> > > > > +             }
> > > > > +     }
> > > > > +out:
> > > > > +     return ret;
>
> BTW, the above really should be:
>
> static bool object_exist(void *obj)
> {
>         struct object_instance *inst;
>
>         list_for_each_entry_rcu(inst, &obj_pool->active_list, active_list) {
>                 if (inst->object == obj)
>                         return true;
>         }
>         return false;
> }

Thanks. Masami suggested that it is better to use a fixed-size array;
I will prepare the next version accordingly.
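[A minimal sketch of the fixed-size-array direction, i.e. my reading of Masami's suggestion; the names and pool size are hypothetical, not the actual next version:

	#define MAX_TRACED_OBJECT 5	/* hypothetical upper bound on traced objects */

	static void *traced_objects[MAX_TRACED_OBJECT];
	static atomic_t num_traced_objects;

	static bool object_exist(void *obj)
	{
		int i, max = atomic_read(&num_traced_objects);

		for (i = 0; i < max; i++) {
			if (READ_ONCE(traced_objects[i]) == obj)
				return true;
		}
		return false;
	}
]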

> -- Steve
>
---
JeffXie
