Message-ID: <1415976656.17262.41.camel@edumazet-glaptop2.roam.corp.google.com>
Date:	Fri, 14 Nov 2014 06:50:56 -0800
From:	Eric Dumazet <eric.dumazet@...il.com>
To:	Hannes Frederic Sowa <hannes@...essinduktion.org>
Cc:	netdev@...r.kernel.org, ogerlitz@...lanox.com, pshelar@...ira.com,
	jesse@...ira.com, jay.vosburgh@...onical.com,
	discuss@...nvswitch.org
Subject: Re: [PATCH net-next] fast_hash: clobber registers correctly for
 inline function use

On Fri, 2014-11-14 at 15:06 +0100, Hannes Frederic Sowa wrote:
> In case the arch_fast_hash call gets inlined, we need to tell gcc which
> registers are clobbered. Most callers were fine, as rhashtable used
> arch_fast_hash via a function pointer and thus the compiler took care
> of that. In case of openvswitch the call got inlined and arch_fast_hash
> touched registers which gcc didn't know about.
> 
> Also don't use conditional compilation inside arguments, as this confuses
> sparse.
> 

Please add a

Fixes: 12-sha1 ("patch title")

tag.

> Reported-by: Jay Vosburgh <jay.vosburgh@...onical.com>
> Cc: Pravin Shelar <pshelar@...ira.com>
> Cc: Jesse Gross <jesse@...ira.com>
> Signed-off-by: Hannes Frederic Sowa <hannes@...essinduktion.org>
> ---
>  arch/x86/include/asm/hash.h | 18 ++++++++++++------
>  1 file changed, 12 insertions(+), 6 deletions(-)
> 
> diff --git a/arch/x86/include/asm/hash.h b/arch/x86/include/asm/hash.h
> index a881d78..771cee0 100644
> --- a/arch/x86/include/asm/hash.h
> +++ b/arch/x86/include/asm/hash.h
> @@ -23,11 +23,14 @@ static inline u32 arch_fast_hash(const void *data, u32 len, u32 seed)
>  {
>  	u32 hash;
>  
> -	alternative_call(__jhash, __intel_crc4_2_hash, X86_FEATURE_XMM4_2,
>  #ifdef CONFIG_X86_64
> -			 "=a" (hash), "D" (data), "S" (len), "d" (seed));
> +	alternative_call(__jhash, __intel_crc4_2_hash, X86_FEATURE_XMM4_2,
> +			 "=a" (hash), "D" (data), "S" (len), "d" (seed)
> +			 : "rcx", "r8", "r9", "r10", "r11", "cc", "memory");
>  #else
> -			 "=a" (hash), "a" (data), "d" (len), "c" (seed));
> +	alternative_call(__jhash, __intel_crc4_2_hash, X86_FEATURE_XMM4_2,
> +			 "=a" (hash), "a" (data), "d" (len), "c" (seed)
> +			 : "cc", "memory");
>  #endif
>  	return hash;
>  }
> @@ -36,11 +39,14 @@ static inline u32 arch_fast_hash2(const u32 *data, u32 len, u32 seed)
>  {
>  	u32 hash;
>  
> -	alternative_call(__jhash2, __intel_crc4_2_hash2, X86_FEATURE_XMM4_2,
>  #ifdef CONFIG_X86_64
> -			 "=a" (hash), "D" (data), "S" (len), "d" (seed));
> +	alternative_call(__jhash2, __intel_crc4_2_hash2, X86_FEATURE_XMM4_2,
> +			 "=a" (hash), "D" (data), "S" (len), "d" (seed)
> +			 : "rcx", "r8", "r9", "r10", "r11", "cc", "memory");


That's a lot of clobbers.

An alternative would be to use an assembly trampoline to save/restore them
before calling __jhash2.

__intel_crc4_2_hash2 can probably be written in assembly; it is quite
simple.



--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
