Message-ID: <39fe6abc-5c3e-bac3-0c0b-cf68bea23ab0@fb.com>
Date: Wed, 7 Nov 2018 00:13:56 +0000
From: Alexei Starovoitov <ast@...com>
To: David Miller <davem@...emloft.net>
CC: Song Liu <songliubraving@...com>,
"dsahern@...il.com" <dsahern@...il.com>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Kernel Team <Kernel-team@...com>,
"ast@...nel.org" <ast@...nel.org>,
"daniel@...earbox.net" <daniel@...earbox.net>,
"peterz@...radead.org" <peterz@...radead.org>,
"acme@...nel.org" <acme@...nel.org>
Subject: Re: [RFC perf,bpf 5/5] perf util: generate bpf_prog_info_event for
short living bpf programs
On 11/6/18 3:36 PM, David Miller wrote:
> From: Alexei Starovoitov <ast@...com>
> Date: Tue, 6 Nov 2018 23:29:07 +0000
>
>> I think concerns with perf overhead from collecting bpf events
>> are unfounded.
>> I would prefer for this flag to be on by default.
>
> I will sit in userspace looping over bpf load/unload and see how the
> person trying to monitor something else with perf feels about that.
>
> Really, it is inappropriate to turn this on by default, I completely
> agree with David Ahern.
>
> It's hard enough, _AS IS_, for me to fight back all of the bloat that
> is in perf right now and get it back to being able to handle simple
> full workloads without dropping events..

It's a separate perf thread and a separate event with its own epoll.
I don't see how it can affect the main event collection.
Let's put it this way: if it does somehow affect it, then yes,
it should not be on by default. If it does not, there is no downside to keeping it on.
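
To make the separation concrete, here is a minimal sketch (not the
actual perf code; the notification fd and names are hypothetical) of a
dedicated thread with its own private epoll instance, which never
touches the descriptors used by the main sample collection:

#include <pthread.h>
#include <sys/epoll.h>

static void *bpf_event_thread(void *arg)
{
	int notify_fd = *(int *)arg;	/* hypothetical fd delivering bpf load/unload notifications */
	int epfd = epoll_create1(0);	/* private epoll instance, not shared with the sample ring buffers */
	struct epoll_event ev = { .events = EPOLLIN };

	ev.data.fd = notify_fd;
	epoll_ctl(epfd, EPOLL_CTL_ADD, notify_fd, &ev);

	for (;;) {
		struct epoll_event out;

		if (epoll_wait(epfd, &out, 1, -1) > 0) {
			/* read prog info and synthesize a bpf_prog_info_event here */
		}
	}
	return NULL;
}

int main(void)
{
	pthread_t tid;
	int fd = 0;	/* placeholder; real code would obtain a notification source */

	pthread_create(&tid, NULL, bpf_event_thread, &fd);
	pthread_join(tid, NULL);
	return 0;
}
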
A typical user expects to type 'perf record' and see everything that
is happening on the system. Right now short-lived bpf programs
will not be seen. How is the user supposed to even know when to use the flag?
The only option is to always pass the flag 'just in case',
which is an unnecessary burden.
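
For comparison, the difference for the user is having to remember
something like this (the flag spelling here is hypothetical, just for
illustration):

	perf record --bpf-event -a -- sleep 5

instead of simply:

	perf record -a -- sleep 5
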
The problem of dropped events is certainly valid, but it's
a separate issue. The aio stuff that Alexey Budankov is working on
is supposed to address that.