Message-ID: <CAMEtUuxNs=HBhtFwhBX3HOU6+QqtYZ+v7sJ74+cg-o+bi5GdoA@mail.gmail.com>
Date: Wed, 30 Jul 2014 10:17:17 -0700
From: Alexei Starovoitov <ast@...mgrid.com>
To: "Frank Ch. Eigler" <fche@...hat.com>
Cc: "David S. Miller" <davem@...emloft.net>,
Ingo Molnar <mingo@...nel.org>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Andy Lutomirski <luto@...capital.net>,
Steven Rostedt <rostedt@...dmis.org>,
Daniel Borkmann <dborkman@...hat.com>,
Chema Gonzalez <chema@...gle.com>,
Eric Dumazet <edumazet@...gle.com>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>,
Arnaldo Carvalho de Melo <acme@...radead.org>,
Jiri Olsa <jolsa@...hat.com>,
Thomas Gleixner <tglx@...utronix.de>,
"H. Peter Anvin" <hpa@...or.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Kees Cook <keescook@...omium.org>,
Linux API <linux-api@...r.kernel.org>,
Network Development <netdev@...r.kernel.org>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH RFC v3 net-next 3/3] samples: bpf: eBPF dropmon example in C
On Wed, Jul 30, 2014 at 8:45 AM, Frank Ch. Eigler <fche@...hat.com> wrote:
> For the record, this is not entirely accurate as to dtrace. dtrace
> delegates aggregation and most reporting to userspace. Also,
> systemtap is "short and deterministic" even for aggregations & nice
> graphs, but since it limits its storage & cpu consumption, its
> arrays/reports cannot get super large.
My understanding of systemtap is that the whole .stp script is converted
to C, compiled as a .ko, and loaded, so all map walking and printing
happens in the kernel. The same goes for ktap, which has special
in-kernel functions to print histograms.
I thought dtrace printfs also happen in the kernel. What trick do they
use to know which pieces of a dtrace script should run in user space?
In the ebpf examples there are two C files: one compiled to the eBPF ISA
for the kernel and one compiled natively for user space. I thought about
combining them, but couldn't figure out a clean way of doing it.
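To make the split concrete, the user-space half is basically a loop
over the map via the bpf syscall wrappers. A minimal sketch, assuming
wrapper helpers named bpf_get_next_key() and bpf_lookup_elem() (names
illustrative, not necessarily the exact ones in the patch):

    /* user space, compiled natively: walk the map and print it */
    long key = -1, next_key, value;

    while (bpf_get_next_key(map_fd, &key, &next_key) == 0) {
        bpf_lookup_elem(map_fd, &next_key, &value);
        printf("loc 0x%lx: %ld drops\n", next_key, value);
        key = next_key;
    }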
>> [...]
>> +SEC("events/skb/kfree_skb")
>> +int bpf_prog2(struct bpf_context *ctx)
>> +{
>> +[...]
>> +	value = bpf_map_lookup_elem(&my_map, &loc);
>> +	if (value)
>> +		(*(long *) value) += 1;
>> +	else
>> +		bpf_map_update_elem(&my_map, &loc, &init_val);
>> +	return 0;
>> +}
>
> What kind of locking/serialization is provided by the ebpf runtime
> over shared variables such as my_map?
It's the traditional RCU scheme.
Programs run under rcu_read_lock(), so bpf_map_lookup_elem() can
return a pointer to a map value that won't disappear while the
program is running.
The in-kernel map implementation needs to follow RCU conventions to
match the eBPF program's assumptions. The map implementation also
enforces a limit on the number of elements.
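Roughly, the calling convention looks like this (a sketch only;
BPF_PROG_RUN and the freeing callback are illustrative names, not
necessarily what the patches use):

    rcu_read_lock();
    ret = BPF_PROG_RUN(prog, ctx); /* lookups inside see stable elems */
    rcu_read_unlock();

    /* on the delete path an element is freed only after a grace
     * period, so a pointer handed to a running program stays valid */
    call_rcu(&elem->rcu, free_htab_elem);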
I haven't posted the 'array' type of map yet. bpf_map_lookup in that
implementation will simply return a 'base + index' pointer.
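Something like this, as a sketch (struct and field names hypothetical):

    static void *array_map_lookup_elem(struct bpf_map *map, void *key)
    {
        struct bpf_array *array = container_of(map, struct bpf_array, map);
        u32 index = *(u32 *)key;

        if (index >= array->max_entries)
            return NULL; /* bounds check, so 'base + index' is safe */

        return array->value + array->elem_size * index;
    }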
Regardless of the map type, the same eBPF program running on
different CPUs may look up the same 'key' and receive the same
map value pointer. In that case, concurrent write access to the
map value can be done with the bpf_xadd instruction, though
using normal reads/writes is also allowed. In some cases
the speed of a racy var++ is preferred over 'lock xadd'.
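From C the atomic form is usually written with a compiler builtin
that the backend turns into bpf_xadd; a sketch against the example
above (assuming LLVM maps __sync_fetch_and_add to bpf_xadd):

    long *value = bpf_map_lookup_elem(&my_map, &loc);

    /* atomic increment: compiles to bpf_xadd */
    if (value)
        __sync_fetch_and_add(value, 1);

    /* racy but cheaper alternative, as in the patch: (*value)++ */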
There are no lock/unlock helper functions available to eBPF
programs, since a program may terminate early (on a division by
zero, for example), so an in-kernel lock helper would be
complicated and slow to implement. It's possible to do, but for
the use cases so far there has been no need.