Date:	Fri, 08 Nov 2013 12:29:07 -0800
From:	Joe Perches <joe@...ches.com>
To:	Neil Horman <nhorman@...driver.com>
Cc:	Dave Jones <davej@...hat.com>, linux-kernel@...r.kernel.org,
	sebastien.dugue@...l.net, Thomas Gleixner <tglx@...utronix.de>,
	Ingo Molnar <mingo@...hat.com>,
	"H. Peter Anvin" <hpa@...or.com>, x86@...nel.org
Subject: Re: [PATCH v2 2/2] x86: add prefetching to do_csum

On Fri, 2013-11-08 at 15:14 -0500, Neil Horman wrote:
> On Fri, Nov 08, 2013 at 11:33:13AM -0800, Joe Perches wrote:
> > On Fri, 2013-11-08 at 14:01 -0500, Neil Horman wrote:
> > > On Wed, Nov 06, 2013 at 09:19:23AM -0800, Joe Perches wrote:
> > > > On Wed, 2013-11-06 at 10:54 -0500, Neil Horman wrote:
> > > > > On Wed, Nov 06, 2013 at 10:34:29AM -0500, Dave Jones wrote:
> > > > > > On Wed, Nov 06, 2013 at 10:23:19AM -0500, Neil Horman wrote:
> > > > > >  > do_csum was identified via perf recently as a hot spot when doing
> > > > > >  > receive on IP-over-InfiniBand workloads.  After a lot of testing and
> > > > > >  > ideas, we found the best optimization available to us currently is to
> > > > > >  > prefetch the entire data buffer prior to doing the checksum.
> > > > []
> > > > > I'll fix this up and send a v3, but I'll give it a day in case there are more
> > > > > comments first.
> > > > 
> > > > Perhaps a reduction in prefetch loop count helps.
> > > > 
> > > > Was capping the amount prefetched, letting the
> > > > hardware prefetcher cover the rest, also tested?
> > > > 
> > > > 	prefetch_lines(buff, min(len, cache_line_size() * 8u));
> > > > 
> > > 
> > > Just tested this out:
> > 
> > Thanks.
> > 
> > Reformatting the table so it's a bit more
> > readable/comparable for me:
> > 
> > len	SetSz	Loops	cycles/byte
> > 			limited	unlimited
> > 1500B	64MB	1M	1.3442	1.3605
> > 1500B	128MB	1M	1.3410	1.3542
> > 1500B	256MB	1M	1.3536	1.3710
> > 1500B	512MB	1M	1.3463	1.3536
> > 9000B	64MB	1M	0.8522	0.8504
> > 9000B	128MB	1M	0.8528	0.8536
> > 9000B	256MB	1M	0.8532	0.8520
> > 9000B	512MB	1M	0.8527	0.8525
> > 64KB	64MB	1M	0.7686	0.7683
> > 64KB	128MB	1M	0.7695	0.7686
> > 64KB	256MB	1M	0.7699	0.7708
> > 64KB	512MB	1M	0.7799	0.7694
> > 
> > This data appears to show some value
> > in capping for 1500B lengths, and only
> > noise for the longer lengths.
> > 
> > Any idea what the actual distribution of
> > do_csum lengths is under various loads?
> > 
> I don't have any hard data, no, sorry.

I think you should get that data before you
implement this.  You might find extremely
short lengths dominate.
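
(If it helps, a crude way to get that data is to bucket the lengths
do_csum actually sees and dump the counters later.  Untested sketch;
csum_len_account() is a made-up helper and the bucket boundaries are
arbitrary; call it at the top of do_csum and read the counts back via
printk or debugfs:)

	#include <linux/atomic.h>

	/* Rough length histogram: <64, <256, <1500, <9000, larger. */
	static atomic_t csum_len_hist[5];

	static inline void csum_len_account(unsigned int len)
	{
		int i = len < 64 ? 0 : len < 256 ? 1 :
			len < 1500 ? 2 : len < 9000 ? 3 : 4;

		atomic_inc(&csum_len_hist[i]);
	}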

> I'll cap the prefetch at 1500B for now, since it
> doesn't seem to hurt or help beyond that.

The table data has a max prefetch of
8 * boot_cpu_data.x86_cache_alignment, so
I believe it's always less than 1500 bytes
(8 * 64 = 512 on typical x86); perhaps a
multiplier of 4 might be slightly better still.
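
For reference, the capped variant amounts to something like the
sketch below (untested; prefetch_lines() is written out by hand here,
with the 4-line cap suggested above):

	#include <linux/prefetch.h>

	/* Touch each cache line of buff, up to len bytes. */
	static inline void prefetch_lines(const void *buff, unsigned int len)
	{
		const void *end = buff + len;

		for (; buff < end; buff += cache_line_size())
			prefetch(buff);
	}

	/* In do_csum, before the main loop: cap the software prefetch
	 * at a few lines and let the hardware prefetcher do the rest. */
	prefetch_lines(buff, min(len, 4u * cache_line_size()));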

