Message-ID: <513F9974.8010909@linaro.org>
Date:	Tue, 12 Mar 2013 14:09:08 -0700
From:	John Stultz <john.stultz@...aro.org>
To:	Feng Tang <feng.tang@...el.com>
CC:	Thomas Gleixner <tglx@...utronix.de>, Ingo Molnar <mingo@...e.hu>,
	"H. Peter Anvin" <hpa@...ux.intel.com>,
	Jason Gunthorpe <jgunthorpe@...idianresearch.com>,
	x86@...nel.org, Len Brown <lenb@...nel.org>,
	"Rafael J. Wysocki" <rafael.j.wysocki@...el.com>,
	linux-kernel@...r.kernel.org, gong.chen@...ux.intel.com
Subject: Re: [PATCH v4 4/4] timekeeping: utilize the suspend-nonstop clocksource
 to count suspended time

On 03/11/2013 08:56 PM, Feng Tang wrote:
> +	/*
> +	 * After the system resumes, we need to calculate the suspended time
> +	 * and compensate the OS time for it. There are three sources that
> +	 * could be used: a nonstop clocksource during suspend, the persistent
> +	 * clock and the rtc device.
> +	 *
> +	 * A platform may have one, two or all of them, and the order of
> +	 * preference is:
> +	 *	suspend-nonstop clocksource -> persistent clock -> rtc
> +	 * A less preferred source is only tried if there is no better usable
> +	 * source. The rtc part is handled separately in rtc core code.
> +	 */
> +	cycle_now = clock->read(clock);
> +	if ((clock->flags & CLOCK_SOURCE_SUSPEND_NONSTOP) &&
> +		cycle_now > clock->cycle_last) {
> +		u64 num, max = ULLONG_MAX;
> +		u32 mult = clock->mult;
> +		u32 shift = clock->shift;
> +		s64 nsec = 0;
> +
> +		cycle_delta = (cycle_now - clock->cycle_last) & clock->mask;
> +
> +		/*
> +		 * "cycle_delta * mult" may overflow 64 bits if the suspended
> +		 * time is too long. In that case we need to do the 64-bit
> +		 * math carefully.
> +		 */
> +		do_div(max, mult);
> +		if (cycle_delta > max) {
> +			num = div64_u64(cycle_delta, max);
> +			nsec = (((u64) max * mult) >> shift) * num;
> +			cycle_delta -= num * max;
> +		}
> +		nsec += ((u64) cycle_delta * mult) >> shift;
> +
> +		ts_delta = ns_to_timespec(nsec);
> +		suspendtime_found = true;
> +	} else if (timespec_compare(&ts_new, &timekeeping_suspend_time) > 0) {
> +		ts_delta = timespec_sub(ts_new, timekeeping_suspend_time);
> +		suspendtime_found = true;
>   	}
> -	/* re-base the last cycle value */
> -	tk->clock->cycle_last = tk->clock->read(tk->clock);
> +
> +	if (suspendtime_found)
> +		__timekeeping_inject_sleeptime(tk, &ts_delta);
> +
> +	/* Re-base the last cycle value */
> +	clock->cycle_last = clock->read(clock);
Since we unconditionally read the clock above, it seems like this last
line could be reworked to:
     clock->cycle_last = cycle_now;

which would save re-reading the clocksource.
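
In diff form against your v4 (context lines approximate), that would be:

-	/* Re-base the last cycle value */
-	clock->cycle_last = clock->read(clock);
+	/* Re-base the last cycle value */
+	clock->cycle_last = cycle_now;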

If you don't have any objections I'll fold that small change into your 
patch.
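
As an aside, for anyone who wants to poke at the chunked scaling above
from userspace, here is a small self-contained sketch. Plain 64-bit
division stands in for the kernel's do_div()/div64_u64(), and the
mult/shift values are made up:

#include <stdint.h>
#include <stdio.h>

/*
 * Convert a cycle delta to nanoseconds without overflowing 64 bits,
 * mirroring the chunking in the patch: process the delta in pieces
 * no larger than UINT64_MAX / mult.
 */
static uint64_t cycles_to_nsec(uint64_t cycle_delta, uint32_t mult, uint32_t shift)
{
	uint64_t max = UINT64_MAX / mult;	/* largest chunk that can't overflow */
	uint64_t nsec = 0;

	if (cycle_delta > max) {
		uint64_t num = cycle_delta / max;

		nsec = ((max * mult) >> shift) * num;
		cycle_delta -= num * max;
	}
	nsec += (cycle_delta * mult) >> shift;

	return nsec;
}

int main(void)
{
	/* Made-up values for a ~20MHz clocksource: 50ns per cycle, shift 24 */
	uint32_t mult = 50u << 24, shift = 24;

	/* 2^40 cycles (~15 hours suspended) would overflow a naive multiply */
	printf("%llu ns\n",
	       (unsigned long long)cycles_to_nsec(1ULL << 40, mult, shift));
	return 0;
}

With these made-up numbers the naive cycle_delta * mult already
overflows after well under an hour of suspend, so the chunked path is
not just a corner case.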

thanks
-john

