Message-ID: <56246B83.8080904@huawei.com>
Date: Mon, 19 Oct 2015 12:03:15 +0800
From: xiakaixu <xiakaixu@...wei.com>
To: Alexei Starovoitov <ast@...mgrid.com>
CC: <davem@...emloft.net>, <acme@...nel.org>, <mingo@...hat.com>,
<a.p.zijlstra@...llo.nl>, <masami.hiramatsu.pt@...achi.com>,
<jolsa@...nel.org>, <daniel@...earbox.net>, <wangnan0@...wei.com>,
<linux-kernel@...r.kernel.org>, <pi3orama@....com>,
<hekuang@...wei.com>, <netdev@...r.kernel.org>
Subject: Re: [PATCH V3 1/2] bpf: control the trace data output on current
cpu when perf sampling
On 2015/10/17 6:06, Alexei Starovoitov wrote:
> On 10/16/15 12:42 AM, Kaixu Xia wrote:
>> This patch adds the flag dump_enable to control the trace data
>> output process during perf sampling. By setting this flag and
>> integrating it with eBPF, we can control the data output process
>> and get the samples we are most interested in.
>>
>> The bpf helper bpf_perf_event_dump_control() can control the
>> perf_event on the current CPU.
>>
>> Signed-off-by: Kaixu Xia <xiakaixu@...wei.com>
>> ---
>> include/linux/perf_event.h | 1 +
>> include/uapi/linux/bpf.h | 5 +++++
>> include/uapi/linux/perf_event.h | 3 ++-
>> kernel/bpf/verifier.c | 3 ++-
>> kernel/events/core.c | 13 ++++++++++++
>> kernel/trace/bpf_trace.c | 44 +++++++++++++++++++++++++++++++++++++++++
>> 6 files changed, 67 insertions(+), 2 deletions(-)
>>
>> diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
>> index 092a0e8..2af527e 100644
>> --- a/include/linux/perf_event.h
>> +++ b/include/linux/perf_event.h
>> @@ -472,6 +472,7 @@ struct perf_event {
>> struct irq_work pending;
>>
>> atomic_t event_limit;
>> + atomic_t dump_enable;
>
> The naming is the hardest...
> How about calling it 'soft_enable' instead?
>
>> --- a/include/uapi/linux/bpf.h
>> +++ b/include/uapi/linux/bpf.h
>> @@ -287,6 +287,11 @@ enum bpf_func_id {
>> * Return: realm if != 0
>> */
>> BPF_FUNC_get_route_realm,
>> +
>> + /**
>> + * u64 bpf_perf_event_dump_control(&map, index, flag)
>> + */
>> + BPF_FUNC_perf_event_dump_control,
>
> and this one is too long.
> Maybe bpf_perf_event_control()?
>
> Daniel, any thoughts on naming?
>
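For reference, below is a minimal sketch of how a BPF program could use
this helper under the current naming. The map definition and the helper
declaration follow the existing samples/bpf/ conventions; the names are
taken from this patch and may change per the comments above, and keying
the map by CPU id is only my assumption about how the index is used:

#include <uapi/linux/bpf.h>
#include <uapi/linux/ptrace.h>
#include "bpf_helpers.h"

/* helper declaration, samples/bpf style; the signature is assumed
 * from the uapi comment in this patch */
static u64 (*bpf_perf_event_dump_control)(void *map, u64 index, u64 flag) =
	(void *) BPF_FUNC_perf_event_dump_control;

struct bpf_map_def SEC("maps") control_map = {
	.type = BPF_MAP_TYPE_PERF_EVENT_ARRAY,
	.key_size = sizeof(int),
	.value_size = sizeof(u32),
	.max_entries = 32,
};

SEC("kprobe/sys_write")
int bpf_prog(struct pt_regs *ctx)
{
	u32 cpu = bpf_get_smp_processor_id();

	/* flag = 0: soft-disable sample output for the perf event
	 * stored in this CPU's slot; flag = 1 turns it back on */
	bpf_perf_event_dump_control(&control_map, cpu, 0);
	return 0;
}
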
>> --- a/include/uapi/linux/perf_event.h
>> +++ b/include/uapi/linux/perf_event.h
>> @@ -331,7 +331,8 @@ struct perf_event_attr {
>> comm_exec : 1, /* flag comm events that are due to an exec */
>> use_clockid : 1, /* use @clockid for time fields */
>> context_switch : 1, /* context switch data */
>> - __reserved_1 : 37;
>> + dump_enable : 1, /* don't output data on samples */
>
> Either the comment or the name is wrong.
> How about calling this one 'soft_disable',
> since you want zero to be the default and the event to be on.
>
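If that rename is adopted, the attr bit would look something like the
following (a sketch only; the updated reserved-bit count is my own
arithmetic, since the '+' line for __reserved_1 is not quoted above):

	context_switch : 1, /* context switch data */
	soft_disable   : 1, /* don't output samples when set; 0 = on */
	__reserved_1   : 36;
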
>> diff --git a/kernel/events/core.c b/kernel/events/core.c
>> index b11756f..74a16af 100644
>> --- a/kernel/events/core.c
>> +++ b/kernel/events/core.c
>> @@ -6337,6 +6337,9 @@ static int __perf_event_overflow(struct perf_event *event,
>> irq_work_queue(&event->pending);
>> }
>>
>> + if (!atomic_read(&event->dump_enable))
>> + return ret;
>
> I'm not an expert in this piece of perf, but should it be 'return 0'
> instead? And maybe it should be moved to the is_sampling_event() check?
> Also please add unlikely().
is_sampling_event() is checked in many other places in the kernel, not only
in the perf event interrupt overflow handler, so I'm not sure it is safe to
move the check there. In addition, I think hwc->interrupts++ should still be
done in __perf_event_overflow() before event->soft_enable is checked.
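The ordering I have in mind is roughly the following (a sketch against
__perf_event_overflow(), using the suggested 'soft_enable' name, the
unlikely() hint, and 'return 0' as suggested above):

	/*
	 * Keep the existing throttling/accounting above, which does
	 * hwc->interrupts++, then bail out before generating any
	 * output if the event was soft-disabled from BPF.
	 */
	if (unlikely(!atomic_read(&event->soft_enable)))
		return 0;
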
>
>> +static void perf_event_check_dump_flag(struct perf_event *event)
>> +{
>> + if (event->attr.dump_enable == 1)
>> + atomic_set(&event->dump_enable, 1);
>> + else
>> + atomic_set(&event->dump_enable, 0);
>
> That looks like it breaks perf: since the default for attr bits is zero,
> all existing events will be soft-disabled?
> How did you test it?
> Please add a test to samples/bpf/ for this feature.
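With the inverted 'soft_disable' semantics you suggest, the
initialization could collapse to something like this (a sketch; the
function name is mine, and existing events stay enabled because the
attr bit defaults to zero):

static void perf_event_init_soft_enable(struct perf_event *event)
{
	/* attr.soft_disable is 0 unless userspace opts in, so the
	 * event starts out enabled and behaviour is unchanged for
	 * existing users */
	atomic_set(&event->soft_enable, !event->attr.soft_disable);
}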