Message-ID: <alpine.DEB.2.11.1601201754370.3575@nanos>
Date:	Wed, 20 Jan 2016 18:21:55 +0100 (CET)
From:	Thomas Gleixner <tglx@...utronix.de>
To:	Jeff Merkey <linux.mdb@...il.com>
cc:	LKML <linux-kernel@...r.kernel.org>,
	John Stultz <john.stultz@...aro.org>
Subject: Re: [BUG REPORT] ktime_get_ts64 causes Hard Lockup

On Wed, 20 Jan 2016, Jeff Merkey wrote:
> On 1/20/16, Thomas Gleixner <tglx@...utronix.de> wrote:
> > On Tue, 19 Jan 2016, Jeff Merkey wrote:
> >> Nasty bug but trivial fix for this.  What happens here is RAX (nsecs)
> >> gets set to a huge value (RAX = 0x17AE7F57C671EA7D) and passed through
> >
> > And how exactly does that happen?
> >
> > 0x17AE7F57C671EA7D = 1.70644e+18  nsec
> > 		   = 1.70644e+09  sec
> > 		   = 2.84407e+07  min
> > 		   = 474011	  hrs
> > 		   = 19750.5	  days
> > 		   = 54.1109	  years
> >
> > That's the real issue, not what you are trying to 'fix' in
> > timespec_add_ns()
> >
> >> Submitting a patch to fix this after I regress and test it.   Since it
> >> makes no sense to loop on a simple calculation, fix should be:
> >>
> >> static __always_inline void timespec_add_ns(struct timespec *a, u64 ns)
> >> {
> >> 	a->tv_sec += div64_u64_rem(a->tv_nsec + ns, NSEC_PER_SEC, &ns);
> >> 	a->tv_nsec = ns;
> >> }
> >
> > No. It's not that simple, because div64_u64_rem() is expensive on 32bit
> > architectures which have no hardware 64/32 division. And that's going to
> > hurt for the normal tick case where we have at max one iteration.
> >
> 
> It's less expensive than a hard coded loop that subtracts in a looping
> function as a substitute for dividing which is what is there.  What a
> busted piece of shit .... LOL

Let's talk about shit.

timespec[64]_add_ns() is used for timekeeping, and in all normal use cases the
nsec part is less than 1e9 nsec. Even on 64 bit a divide is more expensive than
the single-iteration while loop, and it's insanely expensive on 32bit machines
which do not have a 64/32 division in hardware.

The while loop is there for a few corner cases which are a bit larger than 1e9
nsecs, but that's not the case we optimize for.

The case you are creating with your debugger is something completely different
and we never thought about it nor cared about it. Why? Because so far nobody
complained and I never cared about kernel debuggers at all.

What's worse is that your 'fix' does not resolve the underlying issue at
all. Why? Simply because you tried to fix the symptom and not the root cause.

I explained to you the root cause, and I explained why that while() loop is
more efficient than a divide for the case it was written and optimized for.

Instead of reading and understanding what I wrote, you lecture me that your
divide is more efficient and call the code a busted piece of shit.

Sure, you are free to call it a busted piece of shit, but you shouldn't expect
the people who wrote, maintain and understand that code to put up with your
attitude.

Thanks,

	tglx


