Message-ID: <5501BC5A.6000204@plumgrid.com>
Date:	Thu, 12 Mar 2015 09:18:34 -0700
From:	Alexei Starovoitov <ast@...mgrid.com>
To:	Peter Zijlstra <peterz@...radead.org>
CC:	Ingo Molnar <mingo@...nel.org>,
	Steven Rostedt <rostedt@...dmis.org>,
	Namhyung Kim <namhyung@...nel.org>,
	Arnaldo Carvalho de Melo <acme@...radead.org>,
	Jiri Olsa <jolsa@...hat.com>,
	Masami Hiramatsu <masami.hiramatsu.pt@...achi.com>,
	"David S. Miller" <davem@...emloft.net>,
	Daniel Borkmann <daniel@...earbox.net>,
	linux-api@...r.kernel.org, netdev@...r.kernel.org,
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH v6 tip 2/8] tracing: attach BPF programs to kprobes

On 3/12/15 8:15 AM, Peter Zijlstra wrote:
> On Tue, Mar 10, 2015 at 09:18:48PM -0700, Alexei Starovoitov wrote:
>> +unsigned int trace_call_bpf(struct bpf_prog *prog, void *ctx)
>> +{
>> +	unsigned int ret;
>> +	int cpu;
>> +
>> +	if (in_nmi()) /* not supported yet */
>> +		return 1;
>> +
>> +	preempt_disable_notrace();
>> +
>> +	cpu = raw_smp_processor_id();
>> +	if (unlikely(per_cpu(bpf_prog_active, cpu)++ != 0)) {
>> +		/* since some bpf program is already running on this cpu,
>> +		 * don't call into another bpf program (same or different)
>> +		 * and don't send kprobe event into ring-buffer,
>> +		 * so return zero here
>> +		 */
>> +		ret = 0;
>> +		goto out;
>> +	}
>> +
>> +	rcu_read_lock();
>
> You've so far tried very hard to not get into tracing; and then you call
> rcu_read_lock() :-)
>
> So either document why this isn't a problem, provide
> rcu_read_lock_notrace() or switch to RCU-sched and thereby avoid the
> problem.

I don't see the problem.
I actually do turn on the func and func_graph tracers from time to
time to debug the bpf core itself. Why would tracing interfere with
anything this patch is doing? When we're inside the tracing machinery
itself, we need to use only the _notrace() helpers, otherwise
recursion will hurt; but this code is not invoked from there. It's
called from kprobe_ftrace_handler|kprobe_int3_handler->
kprobe_dispatcher->kprobe_perf_func->trace_call_bpf, all of which are
perfectly traceable.
Probably my copy-paste of the preempt_disable_notrace() line from
stack_trace_call() became the source of confusion? I believe a
normal preempt_disable() here will be just fine.
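In other words, the only change I have in mind is this one-liner
(untested, against the function quoted above):

-	preempt_disable_notrace();
+	preempt_disable();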
It's actually redundant too, since preemption is already disabled by
the kprobe machinery anyway. Please help me understand what I'm
missing.
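
That said, if rcu_read_lock() in a traceable path really is the
concern, the RCU-sched flavor you suggest would be a small change as
well. A rough sketch (untested; rcu_read_lock_sched_notrace() and
rcu_read_unlock_sched_notrace() are the existing helpers):

-	rcu_read_lock();
+	rcu_read_lock_sched_notrace();

with the matching rcu_read_unlock_sched_notrace() at the unlock site,
and rcu_dereference_sched() wherever the prog pointer is fetched.
Since preempt_disable() already implies an RCU-sched read-side
critical section, the pair would mostly serve as documentation.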

