Date:	Fri, 14 Nov 2014 16:13:42 +0100
From:	Hannes Frederic Sowa <hannes@...essinduktion.org>
To:	Eric Dumazet <eric.dumazet@...il.com>
Cc:	netdev@...r.kernel.org, ogerlitz@...lanox.com, pshelar@...ira.com,
	jesse@...ira.com, jay.vosburgh@...onical.com,
	discuss@...nvswitch.org
Subject: Re: [PATCH net-next] fast_hash: clobber registers correctly for
 inline function use

On Fr, 2014-11-14 at 06:50 -0800, Eric Dumazet wrote:
> On Fri, 2014-11-14 at 15:06 +0100, Hannes Frederic Sowa wrote:
> > In case the arch_fast_hash call gets inlined we need to tell gcc which
> > registers are clobbered. Most callers were fine, as rhashtable
> > used arch_fast_hash via a function pointer and thus the compiler took
> > care of that. In case of openvswitch the call got inlined and
> > arch_fast_hash touched registers which gcc didn't know about.
> > 
> > Also don't use conditional compilation inside arguments, as this confuses
> > sparse.
> > 
> 
> Please add a 
> Fixes: 12-sha1 ("patch title")

I forgot, will send new version with tag added.

> 
> > Reported-by: Jay Vosburgh <jay.vosburgh@...onical.com>
> > Cc: Pravin Shelar <pshelar@...ira.com>
> > Cc: Jesse Gross <jesse@...ira.com>
> > Signed-off-by: Hannes Frederic Sowa <hannes@...essinduktion.org>
> > ---
> >  arch/x86/include/asm/hash.h | 18 ++++++++++++------
> >  1 file changed, 12 insertions(+), 6 deletions(-)
> > 
> > diff --git a/arch/x86/include/asm/hash.h b/arch/x86/include/asm/hash.h
> > index a881d78..771cee0 100644
> > --- a/arch/x86/include/asm/hash.h
> > +++ b/arch/x86/include/asm/hash.h
> > @@ -23,11 +23,14 @@ static inline u32 arch_fast_hash(const void *data, u32 len, u32 seed)
> >  {
> >  	u32 hash;
> >  
> > -	alternative_call(__jhash, __intel_crc4_2_hash, X86_FEATURE_XMM4_2,
> >  #ifdef CONFIG_X86_64
> > -			 "=a" (hash), "D" (data), "S" (len), "d" (seed));
> > +	alternative_call(__jhash, __intel_crc4_2_hash, X86_FEATURE_XMM4_2,
> > +			 "=a" (hash), "D" (data), "S" (len), "d" (seed)
> > +			 : "rcx", "r8", "r9", "r10", "r11", "cc", "memory");
> >  #else
> 
> 
> 
> 
> > -			 "=a" (hash), "a" (data), "d" (len), "c" (seed));
> > +	alternative_call(__jhash, __intel_crc4_2_hash, X86_FEATURE_XMM4_2,
> > +			 "=a" (hash), "a" (data), "d" (len), "c" (seed)
> > +			 : "cc", "memory");
> >  #endif
> >  	return hash;
> >  }
> > @@ -36,11 +39,14 @@ static inline u32 arch_fast_hash2(const u32 *data, u32 len, u32 seed)
> >  {
> >  	u32 hash;
> >  
> > -	alternative_call(__jhash2, __intel_crc4_2_hash2, X86_FEATURE_XMM4_2,
> >  #ifdef CONFIG_X86_64
> > -			 "=a" (hash), "D" (data), "S" (len), "d" (seed));
> > +	alternative_call(__jhash2, __intel_crc4_2_hash2, X86_FEATURE_XMM4_2,
> > +			 "=a" (hash), "D" (data), "S" (len), "d" (seed)
> > +			 : "rcx", "r8", "r9", "r10", "r11", "cc", "memory");
> 
> 
> That's a lot of clobbers.

Yes, those are basically all the callee-clobbered registers for this
architecture. I didn't look at the generated code for jhash and crc_hash
because I want this code to always be safe, independent of the gcc
version and optimization level.

> Alternative would be to use an assembly trampoline to save/restore them
> before calling __jhash2

Declaring the clobbers this way gives the optimizer the best hints on
how to allocate registers; e.g. it can avoid callee-clobbered registers
and use callee-saved ones instead. If we built a trampoline, we would
have to save and restore all of those registers on every call. This
version just lets gcc decide how to do that.

> __intel_crc4_2_hash2 can probably be written in assembly, it is quite
> simple.

Sure, but all the pre- and postconditions must hold for both jhash and
intel_crc4_2_hash, and I don't want to rewrite jhash in assembler.

Thanks,
Hannes

