Date:	Tue, 5 May 2015 15:07:23 +0200
From:	Peter Zijlstra <peterz@...radead.org>
To:	Kan Liang <kan.liang@...el.com>
Cc:	mingo@...nel.org, acme@...radead.org, eranian@...gle.com,
	andi@...stfloor.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH V7 3/6] perf, x86: handle multiple records in PEBS buffer

On Mon, Apr 20, 2015 at 04:07:47AM -0400, Kan Liang wrote:
> From: Yan, Zheng <zheng.z.yan@...el.com>

<snip>

> Here are some possible ways you may get a lot of collisions.

This is the first time the word 'collisions' is used; either define
what you mean by it or avoid using it.

>   - when you count the same thing multiple times. But it is not a useful
>     configuration.
>   - you can be unfortunate if you measure with a userspace only PEBS
>     event along with either a kernel or unrestricted PEBS event. Imagine
>     the event triggering and setting the overflow flag right before
>     entering the kernel. Then all kernel side events will end up with
>     multiple bits set.
> 
> Here are some numbers about collisions.
> Four frequently occurring events
> (cycles:p,instructions:p,branches:p,mem-stores:p) were tested.
> 
> Test events which are sampled together                   collision rate
> cycles:p,instructions:p                                  0.25%
> cycles:p,instructions:p,branches:p                       0.30%
> cycles:p,instructions:p,branches:p,mem-stores:p          0.35%
> 
> cycles:p,cycles:p                                        98.52%

It would be good if you could illustrate this with the new PERF_RECORD and
the perf tool itself.
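
As a point of reference for the numbers above, a 'collision' here is a
single PEBS record whose status bitmask attributes it to more than one
enabled counter, so the sample cannot be credited to a single event
unambiguously. A minimal sketch of that check (plain C; the function name
and the popcount-based test are assumptions for illustration, not taken
from the patch):

#include <stdint.h>
#include <stdbool.h>

/* A record "collides" when its status bits intersect the set of enabled
 * PEBS counters in more than one position. */
static bool pebs_record_is_collision(uint64_t status, uint64_t enabled_mask)
{
	uint64_t hit = status & enabled_mask;	/* counters this record claims */

	return __builtin_popcountll(hit) > 1;	/* >1 bit set => ambiguous */
}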

> Signed-off-by: Yan, Zheng <zheng.z.yan@...el.com>
> Signed-off-by: Kan Liang <kan.liang@...el.com>
> ---

> --- a/arch/x86/kernel/cpu/perf_event_intel_ds.c
> +++ b/arch/x86/kernel/cpu/perf_event_intel_ds.c

> @@ -958,19 +961,97 @@ static void setup_pebs_sample_data(struct perf_event *event,
>  		data->br_stack = &cpuc->lbr_stack;
>  }
>  
> +static void perf_log_lost(struct perf_event *event)
> +{
> +	struct perf_output_handle handle;
> +	struct perf_sample_data sample;
> +	int ret;
> +
> +	struct {
> +		struct perf_event_header	header;
> +		u64				id;
> +		u64				lost;
> +	} lost_event = {
> +		.header = {
> +			.type = PERF_RECORD_LOST,
> +			.misc = 0,
> +			.size = sizeof(lost_event),
> +		},
> +		.id		= event->id,
> +		.lost		= 1,
> +	};
> +
> +	perf_event_header__init_id(&lost_event.header, &sample, event);
> +
> +	ret = perf_output_begin(&handle, event,
> +				lost_event.header.size);
> +	if (ret)
> +		return;
> +
> +	perf_output_put(&handle, lost_event);
> +	perf_event__output_id_sample(event, &handle, &sample);
> +	perf_output_end(&handle);
> +}

RECORDs are generic, and should live in the core code.

Also, you should introduce this RECORD in a separate patch.

Ideally, you'd also update the tools side to parse this and modify
perf-record to show the number of dropped events as a percentage, going
warn/error when >1%/>5% or so?
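
A rough sketch of what that tools-side check could look like (the struct,
function name and exact thresholds are assumptions for illustration, not
the actual perf-record implementation):

#include <stdio.h>
#include <stdint.h>

struct lost_stats {
	uint64_t written;	/* records successfully recorded */
	uint64_t lost;		/* sum of 'lost' fields from PERF_RECORD_LOST */
};

static void report_lost(const struct lost_stats *st)
{
	uint64_t all = st->written + st->lost;
	double pct;

	if (!all)
		return;

	pct = 100.0 * (double)st->lost / (double)all;

	if (pct > 5.0)
		fprintf(stderr, "Error: %.2f%% of events dropped\n", pct);
	else if (pct > 1.0)
		fprintf(stderr, "Warning: %.2f%% of events dropped\n", pct);
}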