Message-ID: <CAO6TR8UnGPr+mLrOwk+Zc1zEDVa7K=igX48U0-N+ZBoMo66PuA@mail.gmail.com>
Date: Wed, 20 Jan 2016 09:53:07 -0700
From: Jeff Merkey <linux.mdb@...il.com>
To: Thomas Gleixner <tglx@...utronix.de>
Cc: LKML <linux-kernel@...r.kernel.org>,
John Stultz <john.stultz@...aro.org>
Subject: Re: [BUG REPORT] ktime_get_ts64 causes Hard Lockup
On 1/20/16, Thomas Gleixner <tglx@...utronix.de> wrote:
> On Tue, 19 Jan 2016, Jeff Merkey wrote:
>> Nasty bug but trivial fix for this. What happens here is RAX (nsecs)
>> gets set to a huge value (RAX = 0x17AE7F57C671EA7D) and passed through
>
> And how exactly does that happen?
>
> 0x17AE7F57C671EA7D = 1.70644e+18 nsec
> = 1.70644e+09 sec
> = 2.84407e+07 min
> = 474011 hrs
> = 19750.5 days
> = 54.1109 years
>
> That's the real issue, not what you are trying to 'fix' in
> timespec_add_ns()
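>
> (For reference, the conversion checks out; a quick userspace sanity
> check, illustrative only, using the RAX value from the report:
>
> #include <stdio.h>
> #include <stdint.h>
>
> int main(void)
> {
> 	/* the bogus nanosecond count observed in RAX */
> 	uint64_t ns = 0x17AE7F57C671EA7DULL;
> 	double sec = ns / 1e9;
>
> 	printf("%.5e s = %.1f days = %.4f years\n",
> 	       sec, sec / 86400.0, sec / (86400.0 * 365.0));
> 	return 0;
> }
>
> This prints roughly "1.70644e+09 s = 19750.5 days = 54.1109 years",
> matching the arithmetic above.)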
>
>> Submitting a patch to fix this after I regression-test it. Since it
>> makes no sense to loop on a simple calculation, the fix should be:
>>
>> static __always_inline void timespec_add_ns(struct timespec *a, u64 ns)
>> {
>>         a->tv_sec += div64_u64_rem(a->tv_nsec + ns, NSEC_PER_SEC, &ns);
>>         a->tv_nsec = ns;
>> }
>
> No. It's not that simple, because div64_u64_rem() is expensive on 32bit
> architectures which have no hardware 64/32 division. And that's going
> to hurt for the normal tick case where we have at most one iteration.
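>
> (For context: the loop in question is __iter_div_u64_rem() from
> include/linux/math64.h, which timespec_add_ns() uses so that the
> common one-tick case costs a compare and a subtract instead of a
> 64-bit division. A sketch of its shape, from memory, so details may
> differ from the tree at the time:
>
> static inline u64 __iter_div_u64_rem(u64 dividend, u32 divisor,
> 				     u64 *remainder)
> {
> 	u64 ret = 0;
>
> 	while (dividend >= divisor) {
> 		/* empty asm() keeps the compiler from collapsing the
> 		 * loop back into an actual division/modulo */
> 		asm("" : "+rm"(dividend));
>
> 		dividend -= divisor;
> 		ret++;
> 	}
>
> 	*remainder = dividend;
> 	return ret;
> }
>
> With a sane nsec value this runs zero or one times; with the 54-year
> value above it would iterate roughly 1.7 billion times, which is the
> hard lockup.)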
>
It's still less expensive than what's there now: a hard-coded loop that
substitutes repeated subtraction for an actual division. What a busted
piece of shit ... LOL
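
(For comparison: on 64bit, div64_u64_rem() in include/linux/math64.h
reduces to a single hardware divide; a sketch, from memory:

static inline u64 div64_u64_rem(u64 dividend, u64 divisor, u64 *remainder)
{
	*remainder = dividend % divisor;
	return dividend / divisor;
}

On 32bit there is no such instruction and the helper falls back to a
software division routine, which is Thomas's point about cost.)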
> Thanks,
>
> tglx