Date:	Wed, 29 Jul 2015 00:38:45 +0300
From:	Yury <yury.norov@...il.com>
To:	Cassidy Burden <cburden@...eaurora.org>
CC:	akpm@...ux-foundation.org, linux-arm-msm@...r.kernel.org,
	linux-kernel@...r.kernel.org, linux-arm-kernel@...ts.infradead.org,
	Alexey Klimov <klimov.linux@...il.com>,
	"David S. Miller" <davem@...emloft.net>,
	Daniel Borkmann <dborkman@...hat.com>,
	Hannes Frederic Sowa <hannes@...essinduktion.org>,
	Lai Jiangshan <laijs@...fujitsu.com>,
	Mark Salter <msalter@...hat.com>,
	AKASHI Takahiro <takahiro.akashi@...aro.org>,
	Thomas Graf <tgraf@...g.ch>,
	Valentin Rothberg <valentinrothberg@...il.com>,
	Chris Wilson <chris@...is-wilson.co.uk>,
	Rasmus Villemoes <linux@...musvillemoes.dk>, linux@...izon.com
Subject: Re: [PATCH] lib: Make _find_next_bit helper function inline

On 29.07.2015 00:23, Yury wrote:
> On 28.07.2015 22:09, Cassidy Burden wrote:
>> I've tested Yury Norov's find_bit reimplementation with the
>> test_find_bit module (https://lkml.org/lkml/2015/3/8/141) and measured
>> about a 35-40% performance degradation on arm64 3.18, running with a
>> fixed CPU frequency.
>>
>> The performance degradation appears to be caused by the helper
>> function _find_next_bit. After inlining this function into
>> find_next_bit and find_next_zero_bit, I get slightly better
>> performance than the old implementation:
>>
>> find_next_zero_bit          find_next_bit
>> old      new     inline     old      new     inline
>> 26       36      24         24       33      23
>> 25       36      24         24       33      23
>> 26       36      24         24       33      23
>> 25       36      24         24       33      23
>> 25       36      24         24       33      23
>> 25       37      24         24       33      23
>> 25       37      24         24       33      23
>> 25       37      24         24       33      23
>> 25       36      24         24       33      23
>> 25       37      24         24       33      23
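[The linked module has the measurement details; the general shape of such
a timing loop is roughly the following. This is a hypothetical sketch,
not the module's actual code; get_cycles() is from <asm/timex.h>:

	cycles_t t;
	unsigned long i, cnt = 0;

	t = get_cycles();
	for (i = find_first_bit(bitmap, nbits);
	     i < nbits;
	     i = find_next_bit(bitmap, nbits, i + 1))
		cnt++;			/* visit every set bit */
	t = get_cycles() - t;		/* cost of one full scan */

The three columns per function are the old helper, the reworked helper,
and the reworked helper with the inline patch below.]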
>>
>> Signed-off-by: Cassidy Burden <cburden@...eaurora.org>
>> Cc: Alexey Klimov <klimov.linux@...il.com>
>> Cc: David S. Miller <davem@...emloft.net>
>> Cc: Daniel Borkmann <dborkman@...hat.com>
>> Cc: Hannes Frederic Sowa <hannes@...essinduktion.org>
>> Cc: Lai Jiangshan <laijs@...fujitsu.com>
>> Cc: Mark Salter <msalter@...hat.com>
>> Cc: AKASHI Takahiro <takahiro.akashi@...aro.org>
>> Cc: Thomas Graf <tgraf@...g.ch>
>> Cc: Valentin Rothberg <valentinrothberg@...il.com>
>> Cc: Chris Wilson <chris@...is-wilson.co.uk>
>> ---
>>   lib/find_bit.c | 2 +-
>>   1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/lib/find_bit.c b/lib/find_bit.c
>> index 18072ea..d0e04f9 100644
>> --- a/lib/find_bit.c
>> +++ b/lib/find_bit.c
>> @@ -28,7 +28,7 @@
>>    * find_next_zero_bit.  The difference is the "invert" argument, which
>>    * is XORed with each fetched word before searching it for one bits.
>>    */
>> -static unsigned long _find_next_bit(const unsigned long *addr,
>> +static inline unsigned long _find_next_bit(const unsigned long *addr,
>>           unsigned long nbits, unsigned long start, unsigned long invert)
>>   {
>>       unsigned long tmp;
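[For context: the two exported entry points are thin wrappers that differ
only in the invert value they pass down. Paraphrased from the
lib/find_bit.c of this era, so read it as a sketch rather than the exact
source:

	unsigned long find_next_bit(const unsigned long *addr,
			unsigned long size, unsigned long offset)
	{
		/* invert = 0UL: search the fetched words as-is */
		return _find_next_bit(addr, size, offset, 0UL);
	}

	unsigned long find_next_zero_bit(const unsigned long *addr,
			unsigned long size, unsigned long offset)
	{
		/* invert = ~0UL: the XOR flips each word, so zero
		 * bits are found by the same one-bit search */
		return _find_next_bit(addr, size, offset, ~0UL);
	}

Inlining the helper therefore duplicates its body into both wrappers,
which is the cache footprint trade-off discussed below.]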
>
> Hi Cassidi,
>
> First of all, I'm really surprised that there's no assembler
> implementation of the find_bit routines for AArch64. AArch32 has them...
>
> I was thinking about inlining the helper, but decided against it...
>
> 1. The test is not very realistic. https://lkml.org/lkml/2015/2/1/224
> The typical usage pattern is to look for a single bit or a range of
> bits, so in practice nobody calls find_next_bit thousands of times in
> a row (see the sketch below this list).
>
> 2. It is far more important to fit these functions into as few cache
> lines as possible. https://lkml.org/lkml/2015/2/12/114
> In this case, inlining almost doubles the cache line consumption...
>
> 3. Inlining prevents the compiler from applying some other possible
> optimizations. It's quite possible that in a real module the compiler
> will inline the callers of _find_next_bit themselves, and the final
> output will be better. I don't like telling the compiler how to do
> its work.
>
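[To illustrate point 1: the common in-kernel pattern is a short scan via
for_each_set_bit() from <linux/bitops.h>, which expands to one
find_first_bit() call plus one find_next_bit() call per additional set
bit. process_bit() here is a hypothetical handler:

	unsigned int bit;

	for_each_set_bit(bit, bitmap, nbits)
		process_bit(bit);	/* hypothetical per-bit work */

As for point 2, the text-size cost of an inlined variant can be
quantified with the kernel's scripts/bloat-o-meter, run over the old and
new lib/find_bit.o.]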
> Nevertheless, if this is your real use case and inlining helps, I'm
> OK with it.
>
> But I think before/after numbers for x86 are needed as well.
> And why don't you consider __always_inline? A plain inline is only a
> hint and guarantees nothing.
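[For reference, a sketch of that stronger variant against the same hunk;
__always_inline comes from <linux/compiler.h> and forces inlining
regardless of the compiler's size heuristics:

	-static unsigned long _find_next_bit(const unsigned long *addr,
	+static __always_inline unsigned long _find_next_bit(const unsigned long *addr,
	 		unsigned long nbits, unsigned long start, unsigned long invert)

Whether this is a net win still depends on the cache effects from
point 2.]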

(Sorry for the typo in your name. Call me Yuri next time.)

Adding Rasmus and George to CC

