Date:	Fri, 19 Jun 2015 00:06:17 -0700
From:	Alexei Starovoitov <ast@...mgrid.com>
To:	Daniel Wagner <daniel.wagner@...-carit.de>
CC:	linux-kernel@...r.kernel.org
Subject: Re: [PATCH v0] bpf: BPF based latency tracing

On 6/18/15 11:07 PM, Daniel Wagner wrote:
>> I'm only a bit suspicious of kprobes, since we have:
>> NOKPROBE_SYMBOL(preempt_count_sub)
>> but trace_preempt_on() called by preempt_count_sub()
>> doesn't have this mark...
> The original commit indicates that anything called from
> preempt_disable() should also be marked as NOKPROBE_SYMBOL:
>
> 	commit 43627582799db317e966ecb0002c2c3c9805ec0f
> 	Author:	Srinivasa Ds <srinivasa@...ibm.com>   Sun Feb 24 00:24:04 2008
> 	Committer:	Linus Torvalds <torvalds@...dy.linux-foundation.org>   Sun Feb 24 02:13:24 2008
> 	Original File:	kernel/sched.c
>
> 	kprobes: refuse kprobe insertion on add/sub_preempt_counter()
...
> Obviously, this would render this patch useless.

Well, I've tracked it down to that commit as well, but I couldn't find
any discussion of kprobe crashes that led to that patch.
kprobes have their own mechanism to prevent recursion.
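The idea, as a simplified sketch (illustrative only, not the actual
kernel code; the real logic lives in kernel/kprobes.c and the arch
handlers): each CPU tracks the kprobe it is currently servicing, and a
hit that arrives while that is set is treated as recursion and resolved
without re-entering the handlers.

    /* simplified sketch of kprobe recursion detection -- illustrative,
     * not the exact kernel implementation */
    static DEFINE_PER_CPU(struct kprobe *, current_kprobe);

    static int kprobe_handler_sketch(struct kprobe *p)
    {
            /* already inside a kprobe on this CPU? treat it as
             * recursion and bail out instead of re-entering the
             * user handler */
            if (__this_cpu_read(current_kprobe))
                    return 0;

            __this_cpu_write(current_kprobe, p);
            /* ... run pre_handler, single-step, post_handler ... */
            __this_cpu_write(current_kprobe, NULL);
            return 1;
    }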

>>> +SEC("kprobe/trace_preempt_off")
> BTW, is there a reason for not supporting built-in
> tracepoints/events? It looks like it is only an artificial
> limitation of bpf_helpers.

The original bpf+tracing patch attached programs to both
tracepoints and kprobes, but there was a concern that doing so
promotes tracepoint arguments to a stable ABI, since tracepoints
in general are considered stable by most maintainers.
So we decided to go with bpf+kprobe for now: kprobes are
explicitly unstable, so no one can complain that scripts suddenly
break because a probed function disappears or its arguments change.
Since then we've discussed attaching to the trace marker, debug
tracepoints and other things, so hopefully that will be ready soon.
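To make that concrete, here is a hypothetical example (not from this
patch; the probed function and the register access are illustrative):
a kprobe program sees only raw registers, so it is visibly tied to the
probed function's current signature and to the architecture:

    #include <linux/ptrace.h>
    #include <uapi/linux/bpf.h>
    #include "bpf_helpers.h"

    SEC("kprobe/kfree_skb")              /* arbitrary example function */
    int bpf_prog_example(struct pt_regs *ctx)
    {
            /* the first function argument arrives in %rdi on x86-64;
             * if kfree_skb()'s signature or the arch changes, this
             * silently reads the wrong value -- nothing is promised */
            void *skb = (void *) ctx->di;

            return skb ? 1 : 0;
    }

When the probed function changes, the script breaks and that's
understood; a tracepoint-attached program would carry the opposite
expectation.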

>>> +int bpf_prog1(struct pt_regs *ctx)
>>> +{
>>> +    int cpu = bpf_get_smp_processor_id();
>>> +    u64 *ts = bpf_map_lookup_elem(&my_map, &cpu);
>>> +
>>> +    if (ts)
>>> +        *ts = bpf_ktime_get_ns();
>>
>> btw, I'm planning to add native per-cpu maps, which will
>> speed things up further and reduce measurement overhead.
> Funny I was about to suggest something like this :)
>
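Something along these lines (rough sketch only; BPF_MAP_TYPE_PERCPU_ARRAY
is a name I'm making up here, nothing is implemented yet). A lookup from
the program would return the current CPU's private copy, so the
bpf_get_smp_processor_id() key and the cross-CPU cache-line traffic go
away:

    #include <linux/ptrace.h>
    #include <uapi/linux/bpf.h>
    #include "bpf_helpers.h"

    struct bpf_map_def SEC("maps") my_map = {
            .type        = BPF_MAP_TYPE_PERCPU_ARRAY, /* hypothetical */
            .key_size    = sizeof(u32),
            .value_size  = sizeof(u64),
            .max_entries = 1,
    };

    SEC("kprobe/trace_preempt_off")
    int bpf_prog1(struct pt_regs *ctx)
    {
            u32 key = 0;
            /* returns a pointer to this CPU's slot, no cpu key needed */
            u64 *ts = bpf_map_lookup_elem(&my_map, &key);

            if (ts)
                    *ts = bpf_ktime_get_ns();
            return 0;
    }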
>> I think you can retarget this patch to net-next and send
>> it to netdev. It's not too late for this merge window.
> I'll rebase it to net-next.

Great :)

