Date:   Sat, 15 Apr 2023 11:46:35 -0400
From:   Steven Rostedt <rostedt@...dmis.org>
To:     Yafang Shao <laoar.shao@...il.com>
Cc:     Alexei Starovoitov <ast@...nel.org>,
        LKML <linux-kernel@...r.kernel.org>,
        Linux trace kernel <linux-trace-kernel@...r.kernel.org>,
        Masami Hiramatsu <mhiramat@...nel.org>,
        Mark Rutland <mark.rutland@....com>, bpf <bpf@...r.kernel.org>
Subject: Re: [PATCH] tracing: Add generic test_recursion_try_acquire()

On Sat, 15 Apr 2023 22:33:17 +0800
Yafang Shao <laoar.shao@...il.com> wrote:

> 
> I don't have a clear idea why TRACE_CTX_TRANSITION is needed, but it
> seems we have to make the change below for the fentry test case,

The issue is that there are still cases where we can trace before
preempt_count has been updated to reflect the new context. That is,
preempt_count is used to determine whether we are in softirq, hardirq or
NMI context. But there are some places that look like this:

 normal context:
 func_a() --> traced
   --> interrupt
    func_b() --> traced
    preempt_count_add(HARDIRQ_OFFSET)

Now we drop the second trace because it is flagged as a recursion, when
in reality it is in a new context whose preempt_count has not been
updated yet.
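
To make that concrete, here's a simplified sketch (not the exact code in
trace_recursion.h) of how the context bit is derived from preempt_count:

 static __always_inline int trace_get_context_bit(void)
 {
	unsigned long pc = preempt_count();

	/* Highest priority context wins */
	if (pc & NMI_MASK)
		return TRACE_CTX_NMI;
	if (pc & HARDIRQ_MASK)
		return TRACE_CTX_IRQ;
	if (pc & SOFTIRQ_OFFSET)
		return TRACE_CTX_SOFTIRQ;
	return TRACE_CTX_NORMAL;
 }

Until preempt_count_add(HARDIRQ_OFFSET) runs, func_b() still maps to
TRACE_CTX_NORMAL, so its bit is already set by func_a() and the check
sees it as a recursion.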

We are currently fixing these locations, but that work is not complete
yet. And since there are tools that rely on these events not being
dropped, the transition bit needs to stay until this situation is dealt
with.

Can you change the tests to allow a single recursion?
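
Something along these lines in the selftest (hypothetical names, use
whatever counter the test actually checks), so that one extra nested
invocation is tolerated:

	/* The transition bit may let one extra nested event through */
	ASSERT_LE(skel->bss->fentry_cnt, expected_cnt + 1, "fentry_cnt");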

-- Steve

> 
> diff --git a/include/linux/trace_recursion.h b/include/linux/trace_recursion.h
> index d48cd92d2364..f6b4601dd604 100644
> --- a/include/linux/trace_recursion.h
> +++ b/include/linux/trace_recursion.h
> @@ -163,21 +163,8 @@ static __always_inline int
> trace_test_and_set_recursion(unsigned long ip, unsign
>   return -1;
> 
>   bit = trace_get_context_bit() + start;
> - if (unlikely(val & (1 << bit))) {
> - /*
> - * If an interrupt occurs during a trace, and another trace
> - * happens in that interrupt but before the preempt_count is
> - * updated to reflect the new interrupt context, then this
> - * will think a recursion occurred, and the event will be dropped.
> - * Let a single instance happen via the TRANSITION_BIT to
> - * not drop those events.
> - */
> - bit = TRACE_CTX_TRANSITION + start;
> - if (val & (1 << bit)) {
> - do_ftrace_record_recursion(ip, pip);
> - return -1;
> - }
> - }
> + if (unlikely(val & (1 << bit)))
> + return -1;
> 
>   val |= 1 << bit;
>   current->trace_recursion = val;
> 
> 
> > The reason is that the bpf_map_delete_elem() in this fentry SEC()[2]
