Date:	Wed, 27 May 2009 18:01:56 +0200 (CEST)
From:	Thomas Gleixner <tglx@...utronix.de>
To:	Jon Hunter <jon-hunter@...com>
cc:	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	john stultz <johnstul@...ibm.com>, Ingo Molnar <mingo@...e.hu>
Subject: Re: [PATCH 1/2] Dynamic Tick: Prevent clocksource wrapping during
 idle

On Wed, 27 May 2009, Jon Hunter wrote:
>  /**
> + * timekeeping_max_deferment - Returns max time the clocksource can be deferred
> + *
> + * IMPORTANT: Must be called with xtime_lock held!

  No, that would mean that xtime_lock needs to be write-locked. And we
  definitely do not want that.

  The caller needs to observe xtime_lock via read_seqbegin /
  read_seqretry because the clock might change.
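
  A minimal reader-side sketch of that pattern (illustration only, not
  part of the patch; it assumes the caller samples the value inside the
  usual seqlock read loop from <linux/seqlock.h>):

	unsigned long seq;
	s64 max_nsecs;

	do {
		seq = read_seqbegin(&xtime_lock);
		max_nsecs = timekeeping_max_deferment();
	} while (read_seqretry(&xtime_lock, seq));

  Anything read inside the loop is retried if a writer updated the
  timekeeping state in between, so a concurrent clock change cannot be
  missed.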

> + */
> +s64 timekeeping_max_deferment(void)
> +{
> +	s64 max_nsecs;
> +	u64 max_cycles;
> +
> +	/*
> +	 * Calculate the maximum number of cycles that we can pass to the
> +	 * cyc2ns function without overflowing a 64-bit signed result. The
> +	 * maximum number of cycles is equal to ULLONG_MAX/clock->mult which
> +	 * is equivalent to the below.
> +	 * max_cycles < (2^63)/clock->mult
> +	 * max_cycles < 2^(log2((2^63)/clock->mult))
> +	 * max_cycles < 2^(log2(2^63) - log2(clock->mult))
> +	 * max_cycles < 2^(63 - log2(clock->mult))
> +	 * max_cycles < 1 << (63 - log2(clock->mult))
> +	 * Please note that we add 1 to the result of the log2 to account for
> +	 * any rounding errors, ensure the above inequality is satisfied and
> +	 * no overflow will occur.
> +	 */
> +	max_cycles = 1ULL << (63 - (ilog2(clock->mult) + 1));
> +
> +	/*
> +	 * The actual maximum number of cycles we can defer the clocksource is
> +	 * determined by the minimum of max_cycles and clock->mask.
> +	 */
> +	max_cycles = min(max_cycles, clock->mask);
> +	max_nsecs = cyc2ns(clock, max_cycles);

  Why do you want to recalculate the whole thing over and over?

  That computation can be done when the clock source is initialized or
  whenever the clock parameters fundamentally change.

  Stick that value into the clocksource struct and just read it out.
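
  A minimal sketch of that approach (illustration only; the field and
  helper names below are made up, not taken from the patch):

	/* cached once in struct clocksource, e.g. as "s64 max_idle_ns" */
	static void clocksource_calc_max_deferment(struct clocksource *cs)
	{
		u64 max_cycles;

		/*
		 * Largest cycle count cyc2ns() can convert without
		 * overflowing a signed 64 bit result.
		 */
		max_cycles = 1ULL << (63 - (ilog2(cs->mult) + 1));
		max_cycles = min_t(u64, max_cycles, cs->mask);

		cs->max_idle_ns = cyc2ns(cs, max_cycles);
	}

  timekeeping_max_deferment() then reduces to reading clock->max_idle_ns
  (with the margin applied either there or here), instead of redoing the
  ilog2/cyc2ns work on every call.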

> +	/*
> +	 * To ensure that the clocksource does not wrap whilst we are idle,
> +	 * limit the time the clocksource can be deferred by 6.25%. Please
> +	 * note a margin of 6.25% is used because this can be computed with
> +	 * a shift, versus say 5% which would require division.
> +	 */
> +	max_nsecs = max_nsecs - (max_nsecs >> 4);
> +
> +	if (max_nsecs < 0)
> +		max_nsecs = 0;

  How does "max_nsecs = max_nsecs - (max_nsecs >> 4)" ever become
  negative?
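
  (For illustration, the shift-based margin itself: x >> 4 is 1/16 of x,
  i.e. 6.25%, and for a non-negative s64 input the subtraction stays
  non-negative:

	s64 x = 16000000000LL;		/* 16 s worth of nanoseconds */
	s64 margin = x >> 4;		/* 1000000000 ns, i.e. 6.25% of x */
	s64 limited = x - margin;	/* 15000000000 ns, still >= 0 */

  so the check only matters if max_nsecs could already be negative
  before the subtraction.)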

Thanks,

	tglx
