Date:	Tue, 2 Mar 2010 11:53:14 +0100
From:	Robert Richter <robert.richter@....com>
To:	eranian@...gle.com
CC:	linux-kernel@...r.kernel.org, peterz@...radead.org, mingo@...e.hu,
	paulus@...ba.org, fweisbec@...il.com, perfmon2-devel@...ts.sf.net,
	eranian@...il.com
Subject: Re: [PATCH] perf_events: add sampling period randomization support

On 01.03.10 22:07:09, eranian@...gle.com wrote:
> This patch adds support for randomizing the sampling period.
> Randomization is very useful to mitigate the bias that exists
> with sampling. The random number generator does not need to
> be sophisticated. This patch uses the builtin random32()
> generator.
> 
> The user activates randomization by setting the perf_event_attr.random
> field to 1 and by passing a bitmask to control the range of variation
> above the base period. The period will vary between period and period + mask.
> Note that randomization is not available when a target interrupt rate
> (freq) is enabled.
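
For reference, if I read the description correctly, the user-space setup
would look roughly like this (untested sketch; random and random_mask as
named in the patch):

	struct perf_event_attr attr = {
		.type          = PERF_TYPE_HARDWARE,
		.config        = PERF_COUNT_HW_CPU_CYCLES,
		.sample_period = 100000,	/* base period */
		.sample_type   = PERF_SAMPLE_IP | PERF_SAMPLE_PERIOD,
		.random        = 1,		/* enable period randomization */
		.random_mask   = 0xff,		/* add up to 0xff to the base period */
	};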

Instead of providing a mask I would prefer to either use a bit-width
parameter from which the mask can be calculated, or to specify a range
within which the period may vary.
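
With a width parameter the mask could be derived in the kernel, roughly
like this (random_bits is only an illustrative name, not a field from
the patch):

	/* hypothetical bit-width field, restricted to 0..63 */
	u64 mask = attr->random_bits ? ((u64)1 << attr->random_bits) - 1 : 0;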

> 
> The last used period can be collected using the PERF_SAMPLE_PERIOD flag
> in sample_type.
> 
> The patch has been tested on x86. There is also code for PowerPC, but
> I could not test it.
> 
> 	Signed-off-by: Stephane Eranian <eranian@...gle.com>
> 
> --
>  arch/powerpc/kernel/perf_event.c       |    3 +++
>  arch/x86/kernel/cpu/perf_event.c       |    2 ++
>  arch/x86/kernel/cpu/perf_event_intel.c |    4 ++++

I agree with Peter; I also don't see the need to touch arch-specific
code.

>  include/linux/perf_event.h             |    7 +++++--
>  kernel/perf_event.c                    |   24 ++++++++++++++++++++++++
>  5 files changed, 38 insertions(+), 2 deletions(-)
> 

[...]

> +void perf_randomize_event_period(struct perf_event *event)
> +{
> +	u64 new_seed;
> +	u64 mask = event->attr.random_mask;
> +
> +	event->hw.last_period = event->hw.sample_period;
> +
> +	new_seed = random32();
> +
> +	if (unlikely(mask >> 32))
> +		new_seed |= (u64)random32() << 32;
> +
> +	event->hw.sample_period = event->attr.sample_period + (new_seed & mask);

Only adding the random value will lead to longer sample periods on
average, since (new_seed & mask) averages roughly mask/2. To compensate
for this you could calculate something like:

	 event->hw.sample_period = event->attr.sample_period + (new_seed & mask) - (mask >> 1);

Or, alternatively, the offset could be treated as already being included
in sample_period.

Also, a range check on sample_period is necessary to avoid over- or
underflow.
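
Only as a rough sketch of what I mean (untested, the minimum of 1 is
arbitrary):

	s64 period;

	period = event->attr.sample_period + (new_seed & mask) - (mask >> 1);

	/* keep the period in a sane range */
	if (period < 1)
		period = 1;

	event->hw.sample_period = period;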

-Robert

> +}

-- 
Advanced Micro Devices, Inc.
Operating System Research Center
email: robert.richter@....com

