Date:   Wed, 24 Feb 2021 13:14:05 -0500
From:   Steven Rostedt <rostedt@...dmis.org>
To:     Mathieu Desnoyers <mathieu.desnoyers@...icios.com>
Cc:     Michael Jeanson <mjeanson@...icios.com>,
        linux-kernel <linux-kernel@...r.kernel.org>,
        Peter Zijlstra <peterz@...radead.org>,
        Alexei Starovoitov <ast@...nel.org>,
        Yonghong Song <yhs@...com>, paulmck <paulmck@...nel.org>,
        Ingo Molnar <mingo@...hat.com>, acme <acme@...nel.org>,
        Mark Rutland <mark.rutland@....com>,
        Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
        Jiri Olsa <jolsa@...hat.com>,
        Namhyung Kim <namhyung@...nel.org>,
        "Joel Fernandes, Google" <joel@...lfernandes.org>,
        bpf <bpf@...r.kernel.org>
Subject: Re: [RFC PATCH 0/6] [RFC] Faultable tracepoints (v2)

On Wed, 24 Feb 2021 11:59:35 -0500 (EST)
Mathieu Desnoyers <mathieu.desnoyers@...icios.com> wrote:
> 
> As a prototype solution, what I've done currently is to copy the user-space
> data into a kmalloc'd buffer in a preparation step before disabling preemption
> and copying data over into the per-cpu buffers. It works, but I think we should
> be able to do it without the needless copy.
> 
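That staging step might look roughly like the sketch below; every
helper name in it (PAYLOAD_MAX, this_cpu_buffer(), buf_write()) is an
illustrative placeholder, not an actual LTTng symbol:

#include <linux/slab.h>
#include <linux/uaccess.h>
#include <linux/preempt.h>

#define PAYLOAD_MAX 4096	/* illustrative bound on the user string */

static int probe_copy_user_payload(const char __user *ustr)
{
	char *staging;
	long len;

	/* May fault and sleep: preemption is still enabled here. */
	len = strnlen_user(ustr, PAYLOAD_MAX);
	if (len == 0 || len > PAYLOAD_MAX)
		return -EFAULT;

	staging = kmalloc(len, GFP_KERNEL);
	if (!staging)
		return -ENOMEM;

	/* The "needless copy": user memory -> temporary kernel buffer. */
	if (copy_from_user(staging, ustr, len)) {
		kfree(staging);
		return -EFAULT;
	}

	preempt_disable();
	/* No faults allowed here: staging buffer -> per-cpu ring buffer. */
	buf_write(this_cpu_buffer(), staging, len);
	preempt_enable();

	kfree(staging);
	return 0;
}
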
> What I have in mind as an efficient solution (not implemented yet) for the LTTng
> kernel tracer goes as follows:
> 
> #define COMMIT_LOCAL 0
> #define COMMIT_REMOTE 1
> 
> - faultable probe is called from a system call tracepoint [ preemption/blocking/migration is allowed ]
>   - probe code calculates the length which needs to be reserved to store the event
>     (e.g. user strlen),
> 
>   - preempt disable -> [ preemption/blocking/migration is not allowed from here ]
>     - reserve_cpu = smp_processor_id()
>     - reserve space in the ring buffer for reserve_cpu
>       [ from that point on, we have _exclusive_ access to write into the ring buffer "slot"
>         from any cpu until we commit. ]
>   - preempt enable -> [ preemption/blocking/migration is allowed from here ]
> 
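Concretely, I read the scheme above as something like the following
sketch, where rb_reserve(), rb_commit(), per_cpu_buf() and struct
rb_slot stand in for whatever the real ring buffer API would provide:

struct rb_slot {
	void *data;		/* reserved space in the ring buffer */
};

static int probe_faultable_write(const char __user *ustr)
{
	struct rb_slot *slot;
	int reserve_cpu;
	long len;

	/* Step 1: size the record; may fault, preemption enabled. */
	len = strnlen_user(ustr, PAYLOAD_MAX);
	if (len == 0 || len > PAYLOAD_MAX)
		return -EFAULT;

	/* Step 2: reserve with preemption off; from here the slot is
	 * exclusively ours, writable from any cpu until commit. */
	preempt_disable();
	reserve_cpu = smp_processor_id();
	slot = rb_reserve(per_cpu_buf(reserve_cpu), len);
	preempt_enable();
	if (!slot)
		return -ENOBUFS;

	/* Step 3: fill the slot; this copy may fault, block, migrate. */
	if (copy_from_user(slot->data, ustr, len))
		memset(slot->data, 0, len); /* or flag the record invalid */

	/* Step 4: commit, locally if we are still on reserve_cpu,
	 * remotely if we migrated while filling the slot. */
	preempt_disable();
	rb_commit(slot, smp_processor_id() == reserve_cpu ?
		  COMMIT_LOCAL : COMMIT_REMOTE);
	preempt_enable();
	return 0;
}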

So basically the commit position here doesn't move until this task is
scheduled back in and the commit (remote or local) is updated.

To put it in terms of the ftrace ring buffer: we have both a commit
page and a commit index, and they only get moved by the first writer
to start a commit stack (that is, interrupts that interrupted a write
will not increment the commit).
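
As a conceptual model of that rule (not the actual ftrace code): a
per-cpu nesting counter ensures only the outermost writer publishes
the commit when it finishes:

#include <asm/local.h>

struct cpu_buffer {
	local_t committing;		/* depth of nested writers */
	unsigned long write_index;	/* where new data is written */
	unsigned long commit_index;	/* last published position */
};

static void rb_start_write(struct cpu_buffer *cb)
{
	local_inc(&cb->committing);
}

static void rb_end_write(struct cpu_buffer *cb)
{
	/* Nested writers (e.g. an interrupt that preempted a write)
	 * only decrement; the commit does not move until the outermost
	 * writer, which started the commit stack, finishes. */
	if (local_dec_return(&cb->committing) == 0)
		cb->commit_index = cb->write_index;
}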

Now, I'm not sure how LTTng does it, but I could see issues with
ftrace trying to move the commit pointer (the pointer to the new
commit page), as the design currently depends on the fact that this
can't happen while commits are taking place.

Are the pages of the LTTng buffer indexed by an array of pages?
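
In data-structure terms, the contrast I'm asking about (illustrative
types only):

#include <linux/list.h>

struct buffer_page;	/* per-page bookkeeping, as in ftrace */

struct array_indexed_buffer {
	struct page **pages;	/* page for an offset found by index */
	unsigned long nr_pages;
};

struct list_linked_buffer {
	struct list_head pages;		 /* circular list of buffer pages */
	struct buffer_page *commit_page; /* advanced page by page */
};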

-- Steve
