Message-ID: <alpine.DEB.2.11.1601201015000.3575@nanos>
Date: Wed, 20 Jan 2016 10:21:53 +0100 (CET)
From: Thomas Gleixner <tglx@...utronix.de>
To: Jeff Merkey <linux.mdb@...il.com>
cc: LKML <linux-kernel@...r.kernel.org>,
John Stultz <john.stultz@...aro.org>
Subject: Re: [BUG REPORT] ktime_get_ts64 causes Hard Lockup
On Tue, 19 Jan 2016, Jeff Merkey wrote:
> Nasty bug but trivial fix for this. What happens here is RAX (nsecs)
> gets set to a huge value (RAX = 0x17AE7F57C671EA7D) and passed through
And how exactly does that happen?
0x17AE7F57C671EA7D = 1.70644e+18 nsec
= 1.70644e+09 sec
= 2.84407e+07 min
= 474011 hrs
= 19750.5 days
= 54.1109 years
That's the real issue, not what you are trying to 'fix' in timespec_add_ns()
> Submitting a patch to fix this after I regress and test it. Since it
> makes no sense to loop on a simple calculation, fix should be:
>
> static __always_inline void timespec_add_ns(struct timespec *a, u64 ns)
> {
> a->tv_sec += div64_u64_rem(a->tv_nsec + ns, NSEC_PER_SEC, &ns);
> a->tv_nsec = ns;
> }
No. It's not that simple, because div64_u64_rem() is expensive on 32bit
architectures which have no hardware 64/32 division. And that's going to hurt
in the normal tick case, where we have at most one iteration.
Thanks,
tglx