Message-ID: <CA+55aFyC_koMROzCQTqzsKMSamHCmhna2bUc8UHW_WcRNO0bMg@mail.gmail.com>
Date: Thu, 5 Feb 2015 10:20:29 -0800
From: Linus Torvalds <torvalds@...ux-foundation.org>
To: Anshul Garg <aksgarg1989@...il.com>
Cc: Davidlohr Bueso <dave@...olabs.net>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
"anshul.g@...sung.com" <anshul.g@...sung.com>
Subject: Re: [PATCH] lib/int_sqrt.c: Optimize square root function
On Tue, Feb 3, 2015 at 7:42 AM, Anshul Garg <aksgarg1989@...il.com> wrote:
>
> I have done profiling of the int_sqrt function using the perf tool 10 times.
> For this purpose I have created a userspace program which calls the sqrt
> function for values from 1 to a million.
Hmm. I did that too, and it doesn't improve things for me. In fact, it
makes it slower.
[torvalds@i7 ~]$ gcc -Wall -O2 -DREDUCE int_sqrt.c ; time ./a.out
real 0m2.098s
user 0m2.095s
sys 0m0.000s
[torvalds@i7 ~]$ gcc -Wall -O2 int_sqrt.c ; time ./a.out
real 0m1.886s
user 0m1.883s
sys 0m0.000s
and the profile shows that 35% of the time is spent on the backward
branch of the initial reduction loop.
In contrast, my suggested "reduce just once" does seem to improve things:
[torvalds@i7 ~]$ gcc -Wall -O2 -DONCE int_sqrt.c ; time ./a.out
real 0m1.436s
user 0m1.434s
sys 0m0.000s
but it's kind of hacky.
NOTE! This probably depends a lot on microarchitecture details, very
much including the branch predictor etc. And I didn't actually check
that it gives the right result, but I do think this optimization needs
to be looked at more closely if we want to do it.
I was running this on an i7-4770S, fwiw.
Attached is the stupid test-program I used to do the above. Maybe I
did something wrong.
Linus
[Attachment: "int_sqrt.c", text/x-csrc, 951 bytes]