Message-ID: <alpine.LFD.2.02.1303061701130.22263@ionos>
Date:	Wed, 6 Mar 2013 17:10:53 +0100 (CET)
From:	Thomas Gleixner <tglx@...utronix.de>
To:	Feng Tang <feng.tang@...el.com>
cc:	John Stultz <john.stultz@...aro.org>, Ingo Molnar <mingo@...e.hu>,
	"H. Peter Anvin" <hpa@...ux.intel.com>,
	Jason Gunthorpe <jgunthorpe@...idianresearch.com>,
	x86@...nel.org, Len Brown <lenb@...nel.org>,
	"Rafael J. Wysocki" <rafael.j.wysocki@...el.com>,
	linux-kernel@...r.kernel.org, gong.chen@...ux.intel.com
Subject: Re: [PATCH v3 4/5] clocksource: Enable clocksource_cyc2ns() to cover
 big cycles

On Wed, 6 Mar 2013, Feng Tang wrote:
> Hi Thomas,
> 
> Thanks for the reviews.
> 
> On Wed, Mar 06, 2013 at 03:09:26PM +0100, Thomas Gleixner wrote:
> > On Wed, 6 Mar 2013, Feng Tang wrote:
> > 
> > > Current clocksource_cyc2ns() has an implicit limit in that (cycles * mult)
> > > cannot exceed 64 bits. Jason Gunthorpe proposed a way to handle this
> > > big-cycles case, and this patch puts the handling into
> > > clocksource_cyc2ns() so that it can be used unconditionally.
> > 
> > Could be used if it wouldn't break the world and some more.
> 
> Exactly.
> 
> One excuse I can think of is that clocksource_cyc2ns() is usually called
> for cycle counts covering less than 600 seconds, which is the range the
> "mult" and "shift" of a clocksource are calculated for.

That's not an excuse for making even the build fail on ARM and other
32bit archs.
 
> > >  static inline s64 clocksource_cyc2ns(cycle_t cycles, u32 mult, u32 shift)
> > >  {
> > > -	return ((u64) cycles * mult) >> shift;
> > > +	u64 max = ULLONG_MAX / mult;
> > 
> > This breaks everything which does not have a 64/32bit divide
> > instruction. And you can't replace it with do_div() as that would
> > impose massive overhead on those architectures in the fast path.
> 
> I thought about this once. And in my v2 patch, I used some code like
> 
> +		/*
> +		 * The system suspended time and the delta cycles may be very
> +		 * long, so we can't call clocksource_cyc2ns() directly with
> +		 * the clocksource's default mult and shift without overflowing.
> +		 */
> +		max_cycles = 1ULL << (63 - (ilog2(mult) + 1));
> +		while (cycle_delta > max_cycles) {
> +			max_cycles <<= 1;
> +			mult >>= 1;
> +			shift--;
> +		}
> +
> 
> trying to avoid expensive math. But as Jason pointed out, there is some
> accuracy lost.
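
(For the record: each pass of that loop throws away the low bit of mult, so
after n passes the effective scale factor can be off by up to
(2^n - 1) / 2^shift ns per cycle, and that error is then multiplied by a
cycle_delta which is by definition huge before the loop even triggers.)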

Right, but if you precalculate the max_fast_cycles value you can avoid
at least the division in the fast path and then do

   if (cycles > max_fast_cycles)
      return clocksource_cyc2ns_slow();
   return ((u64) cycles * mult) >> shift;	      

clocksource_cyc2ns_slow() should be out of line and there you can do
all the slow 64 bit operations. That keeps the fast path sane and we
don't need extra magic for the large cycle values.
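
Something along these lines (completely untested sketch; the split multiply
in the slow path is just one way to do it and assumes shift <= 32, and
max_fast_cycles is taken as an extra argument here only to keep the example
self contained - in reality it would be precalculated per clocksource,
e.g. ULLONG_MAX / mult):

   /* Out of line, only reached for huge cycle deltas */
   static noinline s64 clocksource_cyc2ns_slow(cycle_t cycles, u32 mult,
					       u32 shift)
   {
	u64 hi = cycles >> 32;
	u64 lo = (u32) cycles;

	/*
	 * ((hi << 32) + lo) * mult >> shift, computed without needing a
	 * more than 64 bit intermediate. Assumes the result fits in 64 bits.
	 */
	return ((lo * mult) >> shift) + ((hi * mult) << (32 - shift));
   }

   static inline s64 clocksource_cyc2ns(cycle_t cycles, u32 mult, u32 shift,
					u64 max_fast_cycles)
   {
	if (unlikely(cycles > max_fast_cycles))
		return clocksource_cyc2ns_slow(cycles, mult, shift);
	return ((u64) cycles * mult) >> shift;
   }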

Thanks,

	tglx

