Message-ID: <CAK8P3a2KxEs9OweVjV9fWTZy_mptU=Yym_1qT6Vot4TSoKu5yw@mail.gmail.com>
Date: Fri, 26 May 2017 11:06:58 +0200
From: Arnd Bergmann <arnd@...db.de>
To: Palmer Dabbelt <palmer@...belt.com>
Cc: Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Olof Johansson <olof@...om.net>, albert@...ive.com
Subject: Re: [PATCH 5/7] RISC-V: arch/riscv/lib
On Thu, May 25, 2017 at 3:59 AM, Palmer Dabbelt <palmer@...belt.com> wrote:
> On Tue, 23 May 2017 04:19:42 PDT (-0700), Arnd Bergmann wrote:
>> On Tue, May 23, 2017 at 2:41 AM, Palmer Dabbelt <palmer@...belt.com> wrote:
>> Also, it would be good to replace the multiply+div64
>> with a single multiplication here, see how x86 and arm do it
>> (for the tsc/__timer_delay case).
>
> Makes sense. I think this should do it
>
> https://github.com/riscv/riscv-linux/commit/d397332f6ebff42f3ecb385e9cf3284fdeda6776
>
> but I'm finding this hard to test, as this only works for 2ms sleeps. It
> seems to be at least in the right ballpark, though.
+ if (usecs > MAX_UDELAY_US) {
+ __delay((u64)usecs * riscv_timebase / 1000000ULL);
+ return;
+ }
You still do the 64-bit division here. What I meant is to avoid the
division entirely and use a multiply+shift instead: do the division once
at boot to precompute a scale factor, then each udelay() call only needs
a multiplication and a right shift.
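A minimal sketch of what I mean (names like init_udelay_mult and
usecs_to_ticks are made up for illustration, not what x86/arm actually
call theirs; riscv_timebase is the timer frequency in Hz as in your
patch):

```c
#include <stdint.h>

/*
 * Fixed-point scale: ticks = usecs * timebase / 1000000
 * becomes              ticks = (usecs * udelay_mult) >> UDELAY_SHIFT
 * with udelay_mult = (timebase << UDELAY_SHIFT) / 1000000, computed once.
 */
#define UDELAY_SHIFT 32

static uint64_t udelay_mult;

/* One-time setup at boot: the only division happens here. */
static void init_udelay_mult(uint64_t timebase_hz)
{
	udelay_mult = (timebase_hz << UDELAY_SHIFT) / 1000000ULL;
}

/* Per-call conversion: multiply + shift, no division. */
static uint64_t usecs_to_ticks(uint64_t usecs)
{
	/* Overflow limits usecs to < 2^64 / udelay_mult; fine for udelay(). */
	return (usecs * udelay_mult) >> UDELAY_SHIFT;
}
```

For a 10 MHz timebase, udelay_mult is 10 << 32, so usecs_to_ticks(2000)
gives exactly 20000 ticks. The multiplication caps the usable usecs
range, but that is exactly what MAX_UDELAY_US already guards against.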
Also, you don't need to base anything on HZ, as you do not rely
on the delay calibration but always use a timer.
Arnd