Message-ID: <20190312170556.GJ4939@kernel.org>
Date: Tue, 12 Mar 2019 14:05:56 -0300
From: Arnaldo Carvalho de Melo <arnaldo.melo@...il.com>
To: Song Liu <songliubraving@...com>
Cc: Arnaldo Carvalho de Melo <arnaldo.melo@...il.com>,
"bpf@...r.kernel.org" <bpf@...r.kernel.org>,
Networking <netdev@...r.kernel.org>,
linux-kernel <linux-kernel@...r.kernel.org>,
Alexei Starovoitov <ast@...nel.org>,
Daniel Borkmann <daniel@...earbox.net>,
Kernel Team <Kernel-team@...com>,
Peter Zijlstra <peterz@...radead.org>,
Arnaldo Carvalho de Melo <acme@...hat.com>,
Jiri Olsa <jolsa@...nel.org>,
Namhyung Kim <namhyung@...nel.org>,
"sdf@...ichev.me" <sdf@...ichev.me>
Subject: Re: [PATCH v9 perf,bpf 09/15] perf, bpf: save btf information as
headers to perf.data

On Tue, Mar 12, 2019 at 04:26:17PM +0000, Song Liu wrote:
>
>
> > On Mar 12, 2019, at 8:16 AM, Arnaldo Carvalho de Melo <arnaldo.melo@...il.com> wrote:
> >
> > On Tue, Mar 12, 2019 at 12:14:05PM -0300, Arnaldo Carvalho de Melo wrote:
> >> On Mon, Mar 11, 2019 at 10:30:45PM -0700, Song Liu wrote:
> >>> +static void print_bpf_btf(struct feat_fd *ff, FILE *fp)
> >>> +{
> >>> + struct perf_env *env = &ff->ph->env;
> >>> + struct rb_root *root;
> >>> + struct rb_node *next;
> >>> +
> >>> + down_read(&env->bpf_progs.lock);
> >>> +
> >>> + root = &env->bpf_progs.btfs;
> >>> + next = rb_first(root);
> >>> +
> >>> + while (next) {
> >>> + struct btf_node *node;
> >>> +
> >>> + node = rb_entry(next, struct btf_node, rb_node);
> >>> + next = rb_next(&node->rb_node);
> >>> + fprintf(fp, "# btf info of id %u\n", node->id);
> >>
> >> So, I couldn't get this to work right now, and I have BPF programs that
> >> are loaded and that have BTF info:
> >>
> >> [root@...co ~]# bpftool prog | tail -6
> >> 208: tracepoint name sys_enter tag 819967866022f1e1 gpl
> >> loaded_at 2019-03-12T11:16:55-0300 uid 0
> >> xlated 528B jited 381B memlock 4096B map_ids 107,106,105
> >> 209: tracepoint name sys_exit tag c1bd85c092d6e4aa gpl
> >> loaded_at 2019-03-12T11:16:55-0300 uid 0
> >> xlated 256B jited 191B memlock 4096B map_ids 107,106
> >> [root@...co ~]#
> >>
> >>
> >> [root@...co ~]# bpftool map | tail -6
> >> 105: perf_event_array name __augmented_sys flags 0x0
> >> key 4B value 4B max_entries 8 memlock 4096B
> >> 106: array name syscalls flags 0x0
> >> key 4B value 1B max_entries 512 memlock 8192B
> >> 107: hash name pids_filtered flags 0x0
> >> key 4B value 1B max_entries 64 memlock 8192B
> >> [root@...co ~]#
> >>
> >> [root@...co ~]# bpftool map dump id 107
> >> [{
> >> "key": 8104,
> >> "value": true
> >> },{
> >> "key": 18868,
> >> "value": true
> >> }
> >> ]
> >> [root@...co ~]# ps ax|egrep 8104\|18868 | grep -v grep
> >> 8104 pts/8 S+ 0:07 trace -e recvmmsg
> >> 18868 ? Ssl 21:21 /usr/libexec/gnome-terminal-server
> >> [root@...co ~]#
> >>
> >> So I was expecting to see those btf lines there :-\
> >>
> >> All the patches up to this point I have already merged and tested in my
> >> local branch.
> >>
> >> Will continue right after lunch to try to figure out why this BTF info
> >> isn't landing on this new header feature...
> >
> > I've pushed what I have to my git.kernel.org repo, branch
> > tmp.perf/bpf-annotate.
> >
> > git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux.git tmp.perf/bpf-annotate
> >
> > The HEAD is this cset.

> I am not sure I fully understand the problem here.

Trying to say it another way: when I run 'perf record' with the patches
in your series applied up to this one ('[PATCH v9 perf,bpf 09/15] perf,
bpf: save btf information as headers to perf.data'), and while there is
a BPF program running that has BTF attached to it, as can be seen with
'bpftool map dump', 'perf report --header-only' isn't showing any line
with 'btf' on it.
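
I.e., based on the fprintf() in the hunk quoted above, I expected the
--header-only output to have one line per BTF object saved in the
header, something like (the id here is just made up for illustration):

# btf info of id 42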

So, using -v I notice that it is failing to enable attr.bpf_event and
attr.ksymbol, which makes me go get another coffee and check whether
this kernel was built from my 5.1 tree, which has those events, or from
5.0, which doesn't ;-\ Sorry for the noise, will get back here after I
check all this :-)

But to recap, the BPF events I was getting were the synthesized ones...

[root@...co ~]# perf record -vv
Using CPUID GenuineIntel-6-8E-A
intel_pt default config: tsc,mtc,mtc_period=3,psb_period=3,pt,branch
nr_cblocks: 0
affinity: SYS
------------------------------------------------------------
perf_event_attr:
size 112
{ sample_period, sample_freq } 4000
sample_type IP|TID|TIME|CPU|PERIOD
read_format ID
disabled 1
inherit 1
mmap 1
comm 1
freq 1
task 1
precise_ip 3
sample_id_all 1
exclude_guest 1
mmap2 1
comm_exec 1
ksymbol 1
bpf_event 1
------------------------------------------------------------
sys_perf_event_open: pid -1 cpu 0 group_fd -1 flags 0x8
sys_perf_event_open failed, error -22
switching off bpf_event
------------------------------------------------------------
perf_event_attr:
size 112
{ sample_period, sample_freq } 4000
sample_type IP|TID|TIME|CPU|PERIOD
read_format ID
disabled 1
inherit 1
mmap 1
comm 1
freq 1
task 1
precise_ip 3
sample_id_all 1
exclude_guest 1
mmap2 1
comm_exec 1
ksymbol 1
------------------------------------------------------------
sys_perf_event_open: pid -1 cpu 0 group_fd -1 flags 0x8
sys_perf_event_open failed, error -22
switching off ksymbol
------------------------------------------------------------
perf_event_attr:
size 112
{ sample_period, sample_freq } 4000
sample_type IP|TID|TIME|CPU|PERIOD
read_format ID
disabled 1
inherit 1
mmap 1
comm 1
freq 1
task 1
precise_ip 3
sample_id_all 1
exclude_guest 1
mmap2 1
comm_exec 1
------------------------------------------------------------
sys_perf_event_open: pid -1 cpu 0 group_fd -1 flags 0x8 = 4
sys_perf_event_open: pid -1 cpu 1 group_fd -1 flags 0x8 = 5
sys_perf_event_open: pid -1 cpu 2 group_fd -1 flags 0x8 = 6
sys_perf_event_open: pid -1 cpu 3 group_fd -1 flags 0x8 = 7
sys_perf_event_open: pid -1 cpu 4 group_fd -1 flags 0x8 = 8
sys_perf_event_open: pid -1 cpu 5 group_fd -1 flags 0x8 = 9
sys_perf_event_open: pid -1 cpu 6 group_fd -1 flags 0x8 = 10
sys_perf_event_open: pid -1 cpu 7 group_fd -1 flags 0x8 = 11
mmap size 528384B
perf event ring buffer mmapped per cpu
Synthesizing TSC conversion information
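
For the record, the fallback visible in the -vv output above is just the
usual probe-and-retry dance: open the event with the new attr bits set
and, if the kernel rejects them with EINVAL (-22), clear the bit and try
again. A minimal sketch of that pattern (not the actual tools/perf code;
it assumes a 5.1+ linux/perf_event.h that already has the 'ksymbol' and
'bpf_event' bits, and the helper names here are made up):

#include <errno.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/syscall.h>
#include <linux/perf_event.h>

static int sys_perf_event_open(struct perf_event_attr *attr, pid_t pid,
			       int cpu, int group_fd, unsigned long flags)
{
	return syscall(__NR_perf_event_open, attr, pid, cpu, group_fd, flags);
}

/* Try to open the event; on EINVAL drop the newer attr bits one by one. */
static int open_with_fallback(struct perf_event_attr *attr, int cpu)
{
	int fd = sys_perf_event_open(attr, -1, cpu, -1, PERF_FLAG_FD_CLOEXEC);

	if (fd < 0 && errno == EINVAL && attr->bpf_event) {
		fprintf(stderr, "switching off bpf_event\n");
		attr->bpf_event = 0;	/* kernel predates PERF_RECORD_BPF_EVENT */
		fd = sys_perf_event_open(attr, -1, cpu, -1, PERF_FLAG_FD_CLOEXEC);
	}

	if (fd < 0 && errno == EINVAL && attr->ksymbol) {
		fprintf(stderr, "switching off ksymbol\n");
		attr->ksymbol = 0;	/* kernel predates PERF_RECORD_KSYMBOL */
		fd = sys_perf_event_open(attr, -1, cpu, -1, PERF_FLAG_FD_CLOEXEC);
	}

	return fd;
}

With both bits switched off, the only BPF information left in the
session is whatever perf itself synthesizes for programs that were
already loaded when the record started, which matches the "synthesized
ones" recap above.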