Open Source and information security mailing list archives
 
Date:   Thu, 4 Apr 2019 13:40:37 -0400
From:   Joel Fernandes <joel@...lfernandes.org>
To:     Daniel Bristot de Oliveira <bristot@...hat.com>
Cc:     linux-kernel@...r.kernel.org, Steven Rostedt <rostedt@...dmis.org>,
        Arnaldo Carvalho de Melo <acme@...nel.org>,
        Ingo Molnar <mingo@...hat.com>,
        Andy Lutomirski <luto@...nel.org>,
        Thomas Gleixner <tglx@...utronix.de>,
        Borislav Petkov <bp@...en8.de>,
        Peter Zijlstra <peterz@...radead.org>,
        "H. Peter Anvin" <hpa@...or.com>, Jiri Olsa <jolsa@...hat.com>,
        Namhyung Kim <namhyung@...nel.org>,
        Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
        Tommaso Cucinotta <tommaso.cucinotta@...tannapisa.it>,
        Romulo Silva de Oliveira <romulo.deoliveira@...c.br>,
        paulmck@...ux.vnet.ibm.com, Clark Williams <williams@...hat.com>,
        x86@...nel.org
Subject: Re: [RFC PATCH 0/7] Early task context tracking

On Tue, Apr 02, 2019 at 10:03:52PM +0200, Daniel Bristot de Oliveira wrote:
> Note: do not take it too seriously, it is just a proof of concept.
> 
> Some time ago, while using perf to check the automaton model, I noticed
> that perf was losing events. The same was reproducible with ftrace.
> 
> See: https://www.spinics.net/lists/linux-rt-users/msg19781.html
> 
> Steve pointed to a problem in the identification of the execution
> context used by the recursion control.
> 
> Currently, recursion control uses the preempt_count to
> identify the current context. The NMI/HARDIRQ/SOFTIRQ counters
> are set in the preempt_count in the irq_enter/exit functions.
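[For reference, here is a minimal user-space sketch of how a recursion guard
can derive the current context from a preempt_count-style word. This is not
the kernel's actual code: the mask values mirror the kernel's preempt_count
bit layout of that era, and the enum names are made up for illustration.]

```c
#include <assert.h>

/* Bit layout modeled on the kernel's preempt_count: softirq count in
 * bits 8-15, hardirq count in bits 16-19, NMI at bit 20 (values are
 * illustrative of the layout at the time of this thread). */
#define SOFTIRQ_MASK 0x0000ff00u
#define HARDIRQ_MASK 0x000f0000u
#define NMI_MASK     0x00100000u

enum ctx { CTX_NORMAL, CTX_SOFTIRQ, CTX_IRQ, CTX_NMI };

/* Highest-priority context wins: an NMI can interrupt a hardirq,
 * which can interrupt a softirq. */
static enum ctx get_context_bit(unsigned int preempt_count)
{
	if (preempt_count & NMI_MASK)
		return CTX_NMI;
	if (preempt_count & HARDIRQ_MASK)
		return CTX_IRQ;
	if (preempt_count & SOFTIRQ_MASK)
		return CTX_SOFTIRQ;
	return CTX_NORMAL;
}
```

[The failure mode described in this thread is that, between the low-level
entry code and irq_enter(), preempt_count has not been bumped yet, so a
classification like this one returns CTX_NORMAL for code that is really
running in IRQ context.]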

Just started looking.

Thinking out loud... can we not just update the preempt_count as early on
entry and as late on exit as possible, and fix it that way? (I haven't yet
fully looked into what could break if we did that.)

I also feel the context tracking should be unified; right now we already have
two methods AFAIK, preempt_count and lockdep, and this would be a third.
Granted, lockdep cannot be enabled in production, but still. It would be nice
to unify these tracking methods into a single point of context tracking that
works well, and better still if we could just fix preempt_count and use that
for the non-debugging use cases.

Also, I feel in_interrupt() etc. should be updated to rely on such a tracking
method if something other than preempt_count is used.

thanks,

 - Joel


> In a trace, they are set like this:
> -------------- %< --------------------
>  0)   ==========> |
>  0)               |  do_IRQ() {		/* First C function */
>  0)               |    irq_enter() {
>  0)               |      		/* set the IRQ context. */
>  0)   1.081 us    |    }
>  0)               |    handle_irq() {
>  0)               |     		/* IRQ handling code */
>  0) + 10.290 us   |    }
>  0)               |    irq_exit() {
>  0)               |      		/* unset the IRQ context. */
>  0)   6.657 us    |    }
>  0) + 18.995 us   |  }
>  0)   <========== |
> -------------- >% --------------------
> 
> As one can see, functions (and events) that run before the preempt_count
> is set and after it is unset are identified in the wrong context, causing
> the misinterpretation that a recursion is taking place.
> When this happens, events are dropped.
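[A user-space model of why that misclassification drops events. This is
illustrative only, not the actual tracer code: the tracer claims a per-context
busy bit, and a second event that appears to arrive in the same context fails
the claim and is dropped as recursion.]

```c
#include <assert.h>

/* One busy bit per context (task, softirq, irq, nmi). In-kernel this
 * would be per-CPU/per-task state; a plain variable suffices here. */
static unsigned int trace_recursion;

/* Returns 1 and claims the bit if tracing may proceed, 0 if an event
 * in this context is already being traced (apparent recursion: drop). */
static int recursion_try_acquire(int context_bit)
{
	unsigned int bit = 1u << context_bit;

	if (trace_recursion & bit)
		return 0;
	trace_recursion |= bit;
	return 1;
}

static void recursion_release(int context_bit)
{
	trace_recursion &= ~(1u << context_bit);
}
```

[If an IRQ fires while a task-context event holds its bit, and the IRQ's
early code is misidentified as task context too, the IRQ's event fails the
acquire and is dropped: exactly the symptom described above.]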
> 
> To resolve this problem, the set/unset of the IRQ/NMI context needs to
> be done before the first C function executes, and after it returns. By
> doing so, and by using this method to identify the context in the trace
> recursion protection, no more events are lost.
> 
> A possible solution is to use a per-cpu variable set and unset in the
> entry point of NMI/IRQs, before calling the C handler. This possible
> solution is presented in the next patches as a proof of concept, for
> x86_64. However, other ideas might be better than mine... so...
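[A hedged user-space sketch of the per-CPU early-tracking idea: a flag that
the assembly entry stub would set before calling the C handler and clear
after it returns, so a tracer sees the right context even for code running
before irq_enter() or after irq_exit(). All names here are illustrative,
not the patch set's actual symbols.]

```c
#include <assert.h>

static unsigned int early_task_ctx;  /* would be DEFINE_PER_CPU in-kernel */

#define EARLY_CTX_IRQ (1u << 0)
#define EARLY_CTX_NMI (1u << 1)

/* What the tracer would consult instead of preempt_count. */
static int traced_in_irq(void)
{
	return !!(early_task_ctx & EARLY_CTX_IRQ);
}

/* Models the C IRQ handler: in the real flow, irq_enter()/irq_exit()
 * run inside it, but the early flag already brackets the whole body. */
static void c_irq_handler(int *seen_in_irq)
{
	/* This runs before irq_enter() would bump preempt_count, yet
	 * the early flag already reports IRQ context: */
	*seen_in_irq = traced_in_irq();
}

/* Models the asm entry stub: set the flag before the first C call,
 * clear it after the handler returns. */
static void simulated_irq_entry(int *seen_in_irq)
{
	early_task_ctx |= EARLY_CTX_IRQ;
	c_irq_handler(seen_in_irq);
	early_task_ctx &= ~EARLY_CTX_IRQ;
}
```

[The design point is ordering: because the flag is written in the entry
stub itself, there is no window in which C code runs with a stale context,
which is the window the trace above exposes.]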
> 
> Daniel Bristot de Oliveira (7):
>   x86/entry: Add support for early task context tracking
>   trace: Move the trace recursion context enum to trace.h and reuse it
>   trace: Optimize trace_get_context_bit()
>   trace/ring_buffer: Use trace_get_context_bit()
>   trace: Use early task context tracking if available
>   events: Create an trace_get_context_bit()
>   events: Use early task context tracking if available
> 
>  arch/x86/entry/entry_64.S       |  9 ++++++
>  arch/x86/include/asm/irqflags.h | 30 ++++++++++++++++++++
>  arch/x86/kernel/cpu/common.c    |  4 +++
>  include/linux/irqflags.h        |  4 +++
>  kernel/events/internal.h        | 50 +++++++++++++++++++++++++++------
>  kernel/softirq.c                |  5 +++-
>  kernel/trace/ring_buffer.c      | 28 ++----------------
>  kernel/trace/trace.h            | 46 ++++++++++++++++++++++--------
>  8 files changed, 129 insertions(+), 47 deletions(-)
> 
> -- 
> 2.20.1
> 
