Message-ID: <20110814124002.GA1528@liondog.tnic>
Date:	Sun, 14 Aug 2011 14:40:02 +0200
From:	Borislav Petkov <bp@...en8.de>
To:	Denys Vlasenko <vda.linux@...glemail.com>
Cc:	Ingo Molnar <mingo@...e.hu>, melwyn lobo <linux.melwyn@...il.com>,
	linux-kernel@...r.kernel.org, "H. Peter Anvin" <hpa@...or.com>,
	Thomas Gleixner <tglx@...utronix.de>,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	Peter Zijlstra <a.p.zijlstra@...llo.nl>,
	borislav.petkov@....com
Subject: Re: x86 memcpy performance

On Sun, Aug 14, 2011 at 01:13:56PM +0200, Denys Vlasenko wrote:
> On Sunday 14 August 2011 11:59, Borislav Petkov wrote:
> > Here's the SSE memcpy version I've got so far. I haven't wired in the
> > proper CPU feature detection yet because we want to run more benchmarks
> > like netperf first to see whether we get any positive results there.
> > 
> > The SYSTEM_RUNNING check is to take care of early boot situations where
> > we can't handle FPU exceptions but still use memcpy. There's an aligned
> > and a misaligned variant which should handle any buffers and sizes,
> > although I've set the SSE memcpy threshold at a minimum buffer size of
> > 512 bytes so the context save/restore cost is at least somewhat covered.
> > 
> > Comments are much appreciated! :-)
> > 
> > --- a/arch/x86/include/asm/string_64.h
> > +++ b/arch/x86/include/asm/string_64.h
> > @@ -28,10 +28,20 @@ static __always_inline void *__inline_memcpy(void *to, const void *from, size_t
> >  
> >  #define __HAVE_ARCH_MEMCPY 1
> >  #ifndef CONFIG_KMEMCHECK
> > +extern void *__memcpy(void *to, const void *from, size_t len);
> > +extern void *__sse_memcpy(void *to, const void *from, size_t len);
> >  #if (__GNUC__ == 4 && __GNUC_MINOR__ >= 3) || __GNUC__ > 4
> > -extern void *memcpy(void *to, const void *from, size_t len);
> > +#define memcpy(dst, src, len)					\
> > +({								\
> > +	size_t __len = (len);					\
> > +	void *__ret;						\
> > +	if (__len >= 512)					\
> > +		__ret = __sse_memcpy((dst), (src), __len);	\
> > +	else							\
> > +		__ret = __memcpy((dst), (src), __len);		\
> > +	__ret;							\
> > +})
> 
> Please, no. Do not inline every memcpy invocation.
> This is pure bloat (considering how many memcpy calls there are)
> and it doesn't even win anything in speed, since there will be
> a function call either way.
> Put the __len >= 512 check inside your memcpy instead.

In the __len < 512 case, this would actually cause two function calls:
first the __sse_memcpy one and then the __memcpy one.
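
For illustration, the out-of-line dispatch suggested above would look
roughly like this (a sketch under assumptions, not the posted patch; the
SSE loop is elided and stubbed out with __memcpy):

	/* the existing small-copy routine from memcpy_64.S */
	extern void *__memcpy(void *to, const void *from, size_t len);

	/*
	 * Size check inside the callee: every memcpy() call site stays a
	 * single call, but copies under 512 bytes now make two calls,
	 * __sse_memcpy -> __memcpy, which is the double call noted above.
	 */
	void *__sse_memcpy(void *to, const void *from, size_t len)
	{
		if (len < 512)
			return __memcpy(to, from, len);

		/* ... SSE copy loop with FPU save/restore; stubbed here ... */
		return __memcpy(to, from, len);
	}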

> You may do the check if you know that __len is constant:
> if (__builtin_constant_p(__len) && __len >= 512) ...
> because in this case gcc will evaluate it at compile-time.

That could justify the bloat at least partially.
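
Combining that with the out-of-line check, a hypothetical form of the
macro would keep only the branch gcc can fold away at compile time (a
sketch, not the posted patch):

	#define memcpy(dst, src, len)					\
	({								\
		size_t __len = (len);					\
		void *__ret;						\
		/* folds away whenever __len is a compile-time constant */ \
		if (__builtin_constant_p(__len) && __len < 512)		\
			__ret = __memcpy((dst), (src), __len);		\
		else							\
			__ret = __sse_memcpy((dst), (src), __len);	\
		__ret;							\
	})

A constant small length compiles to a direct __memcpy call with no
branch; a constant large length, or a length only known at runtime,
becomes a single __sse_memcpy call whose internal size check handles
the small-copy fallback.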

Actually, I had a version which sticks the sse_memcpy code into
memcpy_64.S, and that would save us both the function call and the
bloat. I might return to that one if it turns out that SSE memcpy makes
sense for the kernel.
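
As an aside, regarding the SYSTEM_RUNNING check from the quoted mail:
whichever form the SSE variant takes, kernel-mode SSE use has to be
bracketed with kernel_fpu_begin()/kernel_fpu_end() and avoided before
the FPU machinery is up. A sketch of such a guard (the wrapper name and
the fallback path are assumptions):

	#include <linux/kernel.h>	/* system_state, SYSTEM_RUNNING */
	#include <asm/i387.h>		/* kernel_fpu_begin()/kernel_fpu_end() */

	static void *sse_memcpy_checked(void *to, const void *from, size_t len)
	{
		void *ret;

		/* too early in boot to take an FPU exception: plain copy */
		if (system_state != SYSTEM_RUNNING)
			return __memcpy(to, from, len);

		kernel_fpu_begin();	/* save the task's FPU/SSE state */
		ret = __sse_memcpy(to, from, len);
		kernel_fpu_end();	/* and restore it */
		return ret;
	}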

Thanks.

-- 
Regards/Gruss,
    Boris.
