Date:	Wed, 6 Mar 2013 15:09:26 +0100 (CET)
From:	Thomas Gleixner <tglx@...utronix.de>
To:	Feng Tang <feng.tang@...el.com>
cc:	John Stultz <john.stultz@...aro.org>, Ingo Molnar <mingo@...e.hu>,
	"H. Peter Anvin" <hpa@...ux.intel.com>,
	Jason Gunthorpe <jgunthorpe@...idianresearch.com>,
	x86@...nel.org, Len Brown <lenb@...nel.org>,
	"Rafael J. Wysocki" <rafael.j.wysocki@...el.com>,
	linux-kernel@...r.kernel.org, gong.chen@...ux.intel.com
Subject: Re: [PATCH v3 4/5] clocksource: Enable clocksource_cyc2ns() to cover
 big cycles

On Wed, 6 Mar 2013, Feng Tang wrote:

> Current clocksource_cyc2ns() has an implicit limit: (cycles * mult)
> must not exceed 64 bits. Jason Gunthorpe proposed a way to handle
> this big-cycles case, and this patch puts the handling into
> clocksource_cyc2ns() so that it can be used unconditionally.

Could be used if it wouldn't break the world and some more.

> Suggested-by: Jason Gunthorpe <jgunthorpe@...idianresearch.com>
> Signed-off-by: Feng Tang <feng.tang@...el.com>
> ---
>  include/linux/clocksource.h |   11 ++++++++++-
>  1 file changed, 10 insertions(+), 1 deletion(-)
> 
> diff --git a/include/linux/clocksource.h b/include/linux/clocksource.h
> index aa7032c..1ecc872 100644
> --- a/include/linux/clocksource.h
> +++ b/include/linux/clocksource.h
> @@ -274,7 +274,16 @@ static inline u32 clocksource_hz2mult(u32 hz, u32 shift_constant)
>   */
>  static inline s64 clocksource_cyc2ns(cycle_t cycles, u32 mult, u32 shift)
>  {
> -	return ((u64) cycles * mult) >> shift;
> +	u64 max = ULLONG_MAX / mult;

This breaks every architecture that lacks a native 64/32-bit divide
instruction. And you can't replace it with do_div(), as that would
impose massive overhead on those architectures in the fast path.

The max value can be precalculated once at registration time and stored
in the timekeeper struct. We really do not want expensive calculations
in the fast path.
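For illustration only, a rough userspace sketch of that idea (the names
here are hypothetical stand-ins, not the actual kernel API): compute the
overflow bound once when the clocksource is set up, so the fast path
keeps its single multiply-and-shift and never divides:

```c
#include <stdint.h>

/* Userspace stand-in for the kernel's cycle_t (illustrative only). */
typedef uint64_t cycle_t;

/*
 * Precompute the largest cycle count for which (cycles * mult) still
 * fits in 64 bits. Done once at clocksource registration, so the
 * result can be cached (e.g. in the timekeeper struct) and the fast
 * path never has to divide.
 */
static cycle_t calc_max_cycles(uint32_t mult)
{
	return UINT64_MAX / mult;
}

/* The fast path stays exactly as before: one multiply and one shift. */
static int64_t clocksource_cyc2ns_fast(cycle_t cycles, uint32_t mult,
				       uint32_t shift)
{
	return ((uint64_t)cycles * mult) >> shift;
}
```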

> +	s64 nsec = 0;
> +
> +	/* The (mult * cycles) may overflow 64 bits, so add a max check */
> +	if (cycles > max) {
> +		nsec = ((max * mult) >> shift) * (cycles / max);

This breaks every architecture that lacks a native 64/64-bit divide
instruction.
 
> +		cycles %= max;

Ditto.

As this is the slow path, taken only on resume, you can use the 64-bit
division helpers declared in math64.h. And you want to move the slow
path out of line.
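A minimal userspace sketch of what such an out-of-line slow path could
look like. div64_u64_rem_stub() stands in for a math64.h-style 64/64
divide-with-remainder helper (in the kernel, such a helper avoids
relying on a native 64/64 divide instruction on 32-bit architectures);
the function names and the exact factoring are assumptions, not the
eventual kernel code:

```c
#include <stdint.h>

/*
 * Stand-in for a math64.h-style 64/64 divide with remainder.
 * In userspace the compiler's runtime handles this; the kernel
 * helper exists precisely so 32-bit architectures don't need a
 * hardware 64/64 divide.
 */
static uint64_t div64_u64_rem_stub(uint64_t dividend, uint64_t divisor,
				   uint64_t *remainder)
{
	*remainder = dividend % divisor;
	return dividend / divisor;
}

/*
 * Slow path for cycle counts whose (cycles * mult) would overflow
 * 64 bits. "max" is the precomputed overflow bound (ULLONG_MAX / mult).
 * In the kernel this would be out of line (e.g. noinline) so the
 * common fast path stays small.
 */
static int64_t clocksource_cyc2ns_slow(uint64_t cycles, uint32_t mult,
				       uint32_t shift, uint64_t max)
{
	uint64_t rem;
	uint64_t quot = div64_u64_rem_stub(cycles, max, &rem);
	/* Whole multiples of "max" cycles, converted without overflow. */
	int64_t nsec = (int64_t)(((max * mult) >> shift) * quot);

	/* Remaining cycles fit the plain multiply-and-shift. */
	nsec += (int64_t)(((uint64_t)rem * mult) >> shift);
	return nsec;
}
```

With mult = 2^24 and shift = 24 the conversion is the identity, which
makes the split easy to sanity-check: feeding in cycles above "max"
must still return exactly "cycles" nanoseconds.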

There is a world outside of x86!

Thanks,

	tglx
