Message-ID: <561EC747.8070608@plumgrid.com>
Date:	Wed, 14 Oct 2015 14:21:11 -0700
From:	Alexei Starovoitov <ast@...mgrid.com>
To:	Kaixu Xia <xiakaixu@...wei.com>, davem@...emloft.net,
	acme@...nel.org, mingo@...hat.com, a.p.zijlstra@...llo.nl,
	masami.hiramatsu.pt@...achi.com, jolsa@...nel.org,
	daniel@...earbox.net
Cc:	wangnan0@...wei.com, linux-kernel@...r.kernel.org,
	pi3orama@....com, hekuang@...wei.com, netdev@...r.kernel.org
Subject: Re: [PATCH V2 1/2] bpf: control the trace data output on current cpu
 when perf sampling

On 10/14/15 5:37 AM, Kaixu Xia wrote:
> This patch adds the flag sample_disable to control the trace data
> output process when perf sampling. By setting this flag and
> integrating with ebpf, we can control the data output process and
> get the samples we are most interested in.
>
> The bpf helper bpf_perf_event_sample_control() can control the
> perf_event on current cpu.
>
> Signed-off-by: Kaixu Xia <xiakaixu@...wei.com>
...
> --- a/kernel/events/core.c
> +++ b/kernel/events/core.c
> @@ -6337,6 +6337,9 @@ static int __perf_event_overflow(struct perf_event *event,
>   		irq_work_queue(&event->pending);
>   	}
>
> +	if (!atomic_read(&event->sample_disable))
> +		return ret;
> +

The condition check and the name are inconsistent: the field is called
sample_disable, yet the output is skipped when it reads zero. It should
be either
	if (!enabled) return;
or
	if (disabled) return;
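For instance, a minimal sketch with the check and the name aligned
(sample_enable is illustrative, not a field in the patch):

	/* skip the output path unless sampling is enabled */
	if (!atomic_read(&event->sample_enable))
		return ret;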

>   	if (event->overflow_handler)
>   		event->overflow_handler(event, data, regs);
>   	else
> @@ -7709,6 +7712,14 @@ static void account_event(struct perf_event *event)
>   	account_event_cpu(event, event->cpu);
>   }
>
> +static void perf_event_check_sample_flag(struct perf_event *event)
> +{
> +	if (event->attr.sample_disable == 1)
> +		atomic_set(&event->sample_disable, 0);
> +	else
> +		atomic_set(&event->sample_disable, 1);
> +}

why introduce a new attribute for this?
we already have the 'disabled' flag in perf_event_attr.
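For reference, a minimal userspace sketch of that existing flag
(illustrative, error handling omitted):

	struct perf_event_attr attr = {
		.type     = PERF_TYPE_SOFTWARE,
		.config   = PERF_COUNT_SW_CPU_CLOCK,
		.disabled = 1,	/* create the event disabled */
	};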

> +static u64 bpf_perf_event_sample_control(u64 r1, u64 index, u64 flag, u64 r4, u64 r5)
> +{
> +	struct bpf_map *map = (struct bpf_map *) (unsigned long) r1;
> +	struct bpf_array *array = container_of(map, struct bpf_array, map);
> +	struct perf_event *event;
> +
> +	if (unlikely(index >= array->map.max_entries))
> +		return -E2BIG;
> +
> +	event = (struct perf_event *)array->ptrs[index];
> +	if (!event)
> +		return -ENOENT;
> +
> +	if (flag)

please check only bit 0, and reject the call if any other bits are set,
for future extensibility.
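i.e. something along these lines (illustrative only):

	/* only bit 0 is defined; reserved bits must be zero */
	if (unlikely(flag & ~1ULL))
		return -EINVAL;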

> +		atomic_dec(&event->sample_disable);

it should be atomic_dec_if_positive();

> +	else
> +		atomic_inc(&event->sample_disable);

and atomic_add_unless()
to make sure we don't wrap on either side.
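Putting the two together, a rough sketch assuming the counter is meant
to stay within 0..1:

	if (flag & 1)
		atomic_dec_if_positive(&event->sample_disable);
	else
		atomic_add_unless(&event->sample_disable, 1, 1);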

> +const struct bpf_func_proto bpf_perf_event_sample_control_proto = {

static.
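i.e. only the storage class changes:

	static const struct bpf_func_proto bpf_perf_event_sample_control_proto = {
		...
	};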
