Date:	Fri, 18 Oct 2013 11:46:08 -0400
From:	Doug Ledford <dledford@...hat.com>
To:	Joe Perches <joe@...ches.com>
Cc:	Ingo Molnar <mingo@...nel.org>,
	Eric Dumazet <eric.dumazet@...il.com>,
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH] x86: Run checksumming in parallel accross multiple alu's

On Mon, 2013-10-14 at 22:49 -0700, Joe Perches wrote:
> On Mon, 2013-10-14 at 15:44 -0700, Eric Dumazet wrote:
>> On Mon, 2013-10-14 at 15:37 -0700, Joe Perches wrote:
>> > On Mon, 2013-10-14 at 15:18 -0700, Eric Dumazet wrote:
>> > > attached patch brings much better results
>> > > 
>> > > lpq83:~# ./netperf -H 7.7.8.84 -l 10 -Cc
>> > > MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 7.7.8.84 () port 0 AF_INET
>> > > Recv   Send    Send                          Utilization       Service Demand
>> > > Socket Socket  Message  Elapsed              Send     Recv     Send    Recv
>> > > Size   Size    Size     Time     Throughput  local    remote   local   remote
>> > > bytes  bytes   bytes    secs.    10^6bits/s  % S      % S      us/KB   us/KB
>> > > 
>> > >  87380  16384  16384    10.00      8043.82   2.32     5.34     0.566   1.304  
>> > > 
>> > > diff --git a/arch/x86/lib/csum-partial_64.c b/arch/x86/lib/csum-partial_64.c
>> > []
>> > > @@ -68,7 +68,8 @@ static unsigned do_csum(const unsigned char *buff, unsigned len)
>> > >  			zero = 0;
>> > >  			count64 = count >> 3;
>> > >  			while (count64) { 
>> > > -				asm("addq 0*8(%[src]),%[res]\n\t"
>> > > +				asm("prefetch 5*64(%[src])\n\t"
>> > 
>> > Might the prefetch size be too big here?
>> 
>> To be effective, you need to prefetch well ahead of time.
> 
> No doubt.
> 
>> 5*64 seems common practice (check arch/x86/lib/copy_page_64.S)
> 
> 5 cachelines for some processors seems like a lot.
> 
> Given you've got a test rig, maybe you could experiment
> with 2 and increase it until it doesn't get better.

You have a fundamental misunderstanding of the prefetch operation.  The 5*64
in the above asm statement does not mean a size; it is an offset, with %[src]
as the base pointer.  So it is saying to go to address %[src] + 5*64 and
prefetch there.  The amount prefetched is always one cache line: once the
address is known, whatever cache line holds that address is the cache line we
will prefetch.  Your size concerns have no meaning.
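To illustrate the addressing (a rough sketch only, not the actual do_csum()
from arch/x86/lib/csum-partial_64.c, and it does a plain 64-bit sum rather
than the real carry-folding checksum): 5*64(%[src]) is base-plus-displacement
addressing, so each iteration just touches the cache line containing
%[src] + 320 while summing the current 64 bytes.  __builtin_prefetch() is
used here as a portable stand-in for the prefetch instruction, and the
function name sum_lines() is made up for the example.

/*
 * Rough sketch only -- NOT the kernel's do_csum().  Each iteration
 * consumes one 64-byte cache line and prefetches the line five cache
 * lines (320 bytes) ahead.  The 5*64 selects WHICH line to touch; the
 * amount fetched is always exactly one cache line.
 */
#include <stddef.h>
#include <stdint.h>

static uint64_t sum_lines(const unsigned char *buf, size_t nlines)
{
	uint64_t res = 0;

	while (nlines--) {
		const uint64_t *p = (const uint64_t *)buf;

		/* portable stand-in for "prefetch 5*64(%[src])" */
		__builtin_prefetch(buf + 5 * 64);

		/* stand-in for the unrolled addq chain in the patch */
		res += p[0] + p[1] + p[2] + p[3] + p[4] + p[5] + p[6] + p[7];

		buf += 64;
	}
	return res;
}

Whether five lines ahead is the right distance is the separate tuning
question above: the distance trades off hiding memory latency against
prefetching past the end of the buffer.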

