Message-ID: <edda462f-a32c-bc25-94c8-8d06a47bd480@fb.com>
Date: Wed, 7 Nov 2018 01:09:28 +0000
From: Alexei Starovoitov <ast@...com>
To: David Ahern <dsahern@...il.com>, David Miller <davem@...emloft.net>
CC: Song Liu <songliubraving@...com>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Kernel Team <Kernel-team@...com>,
"ast@...nel.org" <ast@...nel.org>,
"daniel@...earbox.net" <daniel@...earbox.net>,
"peterz@...radead.org" <peterz@...radead.org>,
"acme@...nel.org" <acme@...nel.org>
Subject: Re: [RFC perf,bpf 5/5] perf util: generate bpf_prog_info_event for
short living bpf programs
On 11/6/18 4:44 PM, David Ahern wrote:
>
> So one use case is profiling bpf programs. I was also considering the
> auditing discussion from some weeks ago, which I thought these events
> were also targeting.
yes. there should be a separate mode for the audit use case where
only bpf events are collected. This patch set doesn't add that to
the perf user space side.
The kernel side is common though. It can be used for bpf load/unload
auditing only, or for the different use case in this set, which is
making bpf programs appear in normal 'perf report'.
Please see the link in the cover letter.
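
For anyone following along: the kernel event only needs to identify the
program; the rest (name, jited ksym addresses) can be pulled from user
space with libbpf. Roughly something like this, given only a prog id
(just a sketch of the idea, not the code in this set):

/* sketch: given a bpf prog id (e.g. from a load/unload event), pull
 * the name and jited ksym addresses via bpf_obj_get_info_by_fd().
 * This is roughly the information needed to symbolize samples.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <linux/bpf.h>
#include <bpf/bpf.h>

static int dump_prog(__u32 id)
{
	struct bpf_prog_info info = {};
	__u32 len = sizeof(info);
	__u64 *ksyms = NULL;
	int fd, err, i;

	fd = bpf_prog_get_fd_by_id(id);
	if (fd < 0)
		return fd;

	/* first call: learn how many jited ksyms there are */
	err = bpf_obj_get_info_by_fd(fd, &info, &len);
	if (err)
		goto out;

	if (info.nr_jited_ksyms) {
		__u32 nr = info.nr_jited_ksyms;

		ksyms = calloc(nr, sizeof(*ksyms));
		memset(&info, 0, sizeof(info));
		info.nr_jited_ksyms = nr;
		info.jited_ksyms = (__u64)(unsigned long)ksyms;
		len = sizeof(info);
		/* second call fills the ksym array */
		err = bpf_obj_get_info_by_fd(fd, &info, &len);
		if (err)
			goto out;
	}

	printf("prog id %u name %s\n", info.id, info.name);
	for (i = 0; i < (int)info.nr_jited_ksyms; i++)
		printf("  ksym addr %#llx\n", (unsigned long long)ksyms[i]);
out:
	free(ksyms);
	close(fd);
	return err;
}
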
We decided to abandon my old approach in favor of this one,
but the end result is the same:
the profile goes from 0.81% cpu attributed to some magic address
0x00007fffa001a660 to 18.13% attributed to
bpf_prog_1accc788e7f04c38_balancer_ingres.
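
(The symbol above is the kernel's kallsyms naming for JITed progs,
bpf_prog_<tag>[_<name>]. A tool could compose the same string from
bpf_prog_info along these lines; sketch only, not code from this set:)

/* sketch: how the symbol in the report is composed; the kernel does
 * the equivalent when it registers the JITed image in kallsyms.
 */
#include <stdio.h>
#include <linux/bpf.h>	/* BPF_TAG_SIZE */

static void bpf_prog_sym(char *sym, size_t size,
			 const unsigned char *tag, const char *name)
{
	size_t off = snprintf(sym, size, "bpf_prog_");
	int i;

	for (i = 0; i < BPF_TAG_SIZE; i++)
		off += snprintf(sym + off, size - off, "%02x", tag[i]);
	if (name && name[0])
		snprintf(sym + off, size - off, "_%s", name);
}
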
> As for the overhead, I did not see a separate thread getting spun off
> for the bpf events, so the events are processed inline for this RFC set.
argh. you're right. we have to fix that.
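
One way to get that work off the fast path would be to hand prog ids to
a helper thread and do the bpf() syscalls there, so the mmap-reading
loop never blocks. Very rough sketch of the idea only, not what this
set does:

/* sketch: offload per-prog info collection to a helper thread */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

#define QLEN 64

static unsigned int queue[QLEN];
static unsigned int head, tail;
static bool done;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t cond = PTHREAD_COND_INITIALIZER;

/* called from the record path: cheap, never blocks on bpf() */
static void enqueue_prog_id(unsigned int id)
{
	pthread_mutex_lock(&lock);
	if (head - tail < QLEN)
		queue[head++ % QLEN] = id; /* drop on overflow in this sketch */
	pthread_cond_signal(&cond);
	pthread_mutex_unlock(&lock);
}

/* helper thread: does the potentially slow bpf_obj_get_info_by_fd() work */
static void *bpf_info_worker(void *arg)
{
	pthread_mutex_lock(&lock);
	while (!done || head != tail) {
		while (head == tail && !done)
			pthread_cond_wait(&cond, &lock);
		while (head != tail) {
			unsigned int id = queue[tail++ % QLEN];

			pthread_mutex_unlock(&lock);
			printf("synthesizing info for prog id %u\n", id);
			/* dump_prog(id) from the earlier sketch would go here */
			pthread_mutex_lock(&lock);
		}
	}
	pthread_mutex_unlock(&lock);
	return NULL;
}

/* called once recording stops */
static void stop_worker(void)
{
	pthread_mutex_lock(&lock);
	done = true;
	pthread_cond_broadcast(&cond);
	pthread_mutex_unlock(&lock);
}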