Message-ID: <yw1xmvv1dcdy.fsf@unicorn.mansr.com>
Date: Thu, 29 Oct 2015 13:37:13 +0000
From: Måns Rullgård <mans@...sr.com>
To: Alexey Brodkin <Alexey.Brodkin@...opsys.com>
Cc: "shemminger\@linux-foundation.org" <shemminger@...ux-foundation.org>,
"linux-kernel\@vger.kernel.org" <linux-kernel@...r.kernel.org>,
"Vineet.Gupta1\@synopsys.com" <Vineet.Gupta1@...opsys.com>,
"linux-snps-arc\@lists.infradead.org"
<linux-snps-arc@...ts.infradead.org>,
"rmk+kernel\@arm.linux.org.uk" <rmk+kernel@....linux.org.uk>,
"davem\@davemloft.net" <davem@...emloft.net>,
"mingo\@elte.hu" <mingo@...e.hu>,
Nicolas Pitre <nicolas.pitre@...aro.org>
Subject: Re: [PATCH] __div64_32: implement division by multiplication for 32-bit arches

Alexey Brodkin <Alexey.Brodkin@...opsys.com> writes:
> Hi mans,
>
> On Thu, 2015-10-29 at 12:52 +0000, Måns Rullgård wrote:
>> Alexey Brodkin <Alexey.Brodkin@...opsys.com> writes:
>>
>> > The existing default implementation of __div64_32() for 32-bit arches
>> > unfolds into a huge routine with lots of arithmetic (+, -, *), all of it
>> > in loops. That leads to an obvious performance degradation when do_div()
>> > is used frequently.
>> >
>> > A good example is heavy TCP/IP traffic.
>> > This is what perf shows during an iperf3 run:
>> > -------------->8--------------
>> > 30.05% iperf3 [kernel.kallsyms] [k] copy_from_iter
>> > 11.77% iperf3 [kernel.kallsyms] [k] __div64_32
>> > 5.44% iperf3 [kernel.kallsyms] [k] memset
>> > 5.32% iperf3 [kernel.kallsyms] [k] stmmac_xmit
>> > 2.70% iperf3 [kernel.kallsyms] [k] skb_segment
>> > 2.56% iperf3 [kernel.kallsyms] [k] tcp_ack
>> > -------------->8--------------
>> >
>> > do_div() here is mostly used in skb_mstamp_get() to convert the nanoseconds
>> > returned by local_clock() to the microseconds used in the timestamp.
>> > BTW the conversion itself is as simple as "/= 1000".
>> >
>> > Fortunately we already have a much better __div64_32() for 32-bit ARM.
>> > There, in the case of division by a constant, the preprocessor calculates
>> > a so-called "magic number" which is later used in multiplications instead
>> > of divisions. It's really nice and close to optimal, but obviously it
>> > works only on ARM because ARM assembly is involved.
>> >
>> > Now why don't we extend the same approach to all other 32-bit arches,
>> > with the multiplication part implemented in pure C? With a good compiler
>> > the resulting assembly will be quite close to hand-written assembly.
>> >
>> > And this patch implements exactly that.
>> >
>> > But there's at least one problem which I don't know how to solve.
>> > The preprocessor magic only happens if __div64_32() is inlined (that's
>> > obvious - the preprocessor has to know whether the divisor is constant
>> > or not).
>> >
>> > But __div64_32() is already marked as a weak function (which in turn is
>> > required to allow some architectures to provide their own optimized
>> > implementations), i.e. adding "inline" to __div64_32() is not an option.
>> >
>> > So I'd like to hear opinions on how to proceed with this patch.
>> > There is of course the simplest solution - use this implementation only
>> > on my architecture of preference (read: ARC) - but IMHO this change may
>> > benefit other architectures as well.
>>
>> I tried something similar for MIPS a while ago after noticing a similar
>> perf report. Adapting Nico's ARM code gave some nice speedups, but only
>> when I used MIPS assembly for the long multiplies. Apparently gcc is
>> still too stupid to do the sane thing.
>
> Could you please elaborate a little on what the problem with gcc was
> compared to hand-written asm?

In the final multiplications (the ones using ARM assembly), gcc has a
tendency to multiply things by zero and add the (zero) result to
something. This generally happens when multiplying a 64-bit value by a
32-bit one. The 32-bit value is simply converted to 64-bit by the usual
promotion rules, and gcc forgets that the upper half is known to be zero.
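
A contrived example of what I mean (my illustration, not code from the
patch): written naively, the 32-bit operand is promoted to 64 bits and
it is up to gcc to notice that the upper half is zero.  Splitting the
64-bit operand by hand leaves only 32x32->64 multiplies, which is
roughly what the hand-written umull/umlal (ARM) or multu (MIPS)
sequences boil down to:

#include <stdint.h>
#include <assert.h>

/* Low 64 bits of a 64x32 product, written naively: the 32-bit operand
 * is promoted to 64 bits and the compiler has to notice on its own
 * that the upper half is zero. */
static uint64_t mul_64_32_naive(uint64_t x, uint32_t m)
{
        return x * m;
}

/* Same result, but split so that only 32x32->64 multiplies remain. */
static uint64_t mul_64_32_split(uint64_t x, uint32_t m)
{
        uint32_t lo = (uint32_t)x;
        uint32_t hi = (uint32_t)(x >> 32);

        return (uint64_t)lo * m + (((uint64_t)hi * m) << 32);
}

int main(void)
{
        uint64_t x = 0x123456789abcdef0ULL;

        /* both give the low 64 bits of x * m */
        assert(mul_64_32_naive(x, 1000u) == mul_64_32_split(x, 1000u));
        return 0;
}
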
> The point is that if the preprocessor does proper constant propagation,
> the compiler only needs to implement the calculations marked "run-time
> calculations". And those in turn are pretty straightforward 32-bit
> additions and multiplications.

The constant calculation is fine. It's the final multiplication that's
the problem.
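
To make the constant part concrete, here is a minimal sketch for the
/1000 case with a 32-bit dividend (again my illustration, not the patch
code): ceil(2^38 / 1000) is 0x10624DD3, so one 32x32->64 multiply and a
shift replace the division, and the result is exact for every 32-bit
input.  The real do_div() case has a 64-bit dividend, so it needs a
wider magic constant and the multi-word product above - which is
exactly where the final multiplication comes in.

#include <stdint.h>
#include <assert.h>

/* x / 1000 via the reciprocal 0x10624DD3 == ceil(2^38 / 1000) */
static uint32_t div1000_u32(uint32_t x)
{
        return (uint32_t)(((uint64_t)x * 0x10624DD3u) >> 38);
}

int main(void)
{
        /* spot check against the compiler's own division */
        for (uint32_t x = 0; x < 10000000; x += 3)
                assert(div1000_u32(x) == x / 1000);
        assert(div1000_u32(UINT32_MAX) == UINT32_MAX / 1000);
        return 0;
}
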
> And at least on ARC I saw that with this change perf no longer shows
> __div64_32() during iperf, and the iperf results themselves improved by
> about 10%. So I'd say the advantage is quite noticeable.

There was an improvement without assembly as well, but with the MIPS
equivalent of the ARM assembly, it got much better.
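
For reference, the compile-time dispatch we are both assuming looks
roughly like this in plain C (illustrative names, not the actual
macros, and only a power-of-two fast path shown - the reciprocal
multiply would slot into the same branch for other constants).  The
point is that the wrapper has to be a macro or inline function so that
__builtin_constant_p() can see the divisor; a weak out-of-line
__div64_32() never gets that information:

#include <stdint.h>
#include <stdio.h>

/* stand-in for the out-of-line generic routine in lib/div64.c */
static uint32_t div64_32_generic(uint64_t *n, uint32_t base)
{
        uint32_t rem = (uint32_t)(*n % base);

        *n /= base;
        return rem;
}

#define my_do_div(n, base) ({                                   \
        uint32_t __base = (base);                               \
        uint32_t __rem;                                         \
        if (__builtin_constant_p(__base) && __base &&           \
            (__base & (__base - 1)) == 0) {                     \
                /* constant power of two: mask and shift */     \
                __rem = (uint32_t)(n) & (__base - 1);           \
                (n) >>= __builtin_ctz(__base);                  \
        } else {                                                \
                /* anything else: out-of-line generic path */   \
                __rem = div64_32_generic(&(n), __base);         \
        }                                                       \
        __rem;                                                  \
})

int main(void)
{
        uint64_t a = 123456789ULL, b = 123456789ULL;
        uint32_t r1 = my_do_div(a, 1024);       /* fast path */
        uint32_t r2 = my_do_div(b, 1000);       /* generic path */

        printf("%llu rem %u, %llu rem %u\n",
               (unsigned long long)a, r1, (unsigned long long)b, r2);
        return 0;
}
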
--
Måns Rullgård
mans@...sr.com