Message-ID: <20160120190205.GR6357@twins.programming.kicks-ass.net>
Date:	Wed, 20 Jan 2016 20:02:05 +0100
From:	Peter Zijlstra <peterz@...radead.org>
To:	Daniel Lezcano <daniel.lezcano@...aro.org>
Cc:	tglx@...utronix.de, rafael@...nel.org, linux-pm@...r.kernel.org,
	linux-kernel@...r.kernel.org, nicolas.pitre@...aro.org,
	vincent.guittot@...aro.org
Subject: Re: [RFC V2 2/2] sched: idle: IRQ based next prediction for idle period

On Wed, Jan 20, 2016 at 05:00:33PM +0100, Daniel Lezcano wrote:
> +static void sched_irq_timing_handler(unsigned int irq, ktime_t timestamp, void *dev_id)
> +{
> +	u32 diff;
> +	unsigned int cpu = raw_smp_processor_id();
> +	struct wakeup *w = per_cpu(wakeups[irq], cpu);
> +
> +	/*
> +	 * If this is the first occurrence of the interrupt in the
> +	 * series, we can't compute any stats as we don't have an
> +	 * interval yet; just store the timestamp and exit.
> +	 */
> +	if (ktime_equal(w->timestamp, ktime_set(0, 0))) {
> +		w->timestamp = timestamp;
> +		return;
> +	}
> +
> +	/*
> +	 * Microsecond resolution is enough for our purposes.
> +	 */

It is also a friggin pointless /1000. The cpuidle code loves to do this
too, and it's silly: u64 add/sub is _way_ cheaper than u64 / 1000.

> +	diff = ktime_us_delta(timestamp, w->timestamp);
> +	w->timestamp = timestamp;
> +
> +	/*
> +	 * There is no point attempting predictions on interrupts more
> +	 * than ~1 second apart. This has no benefit for sleep state
> +	 * selection and increases the risk of overflowing our variance
> +	 * computation. Reset all stats in that case.
> +	 */
> +	if (diff > (1 << 20)) {
> +		stats_reset(&w->stats);
> +		return;
> +	}
> +
> +	stats_add(&w->stats, diff);
> +}
> +
> +static ktime_t next_irq_event(void)
> +{
> +	unsigned int irq, cpu = raw_smp_processor_id();
> +	ktime_t diff, next, min = ktime_set(KTIME_SEC_MAX, 0);
> +	ktime_t now = ktime_get();

Why?! Do we care about NTP-correct timestamps here?

ktime_get() can be horrendously slow; don't use it for statistics.

> +		next = ktime_add_us(w->timestamp, stats_mean(&w->stats));
> +	s64 next_timer = ktime_to_us(tick_nohz_get_sleep_length());
> +	s64 next_irq = ktime_to_us(next_irq_event());

more nonsense, just say no.
