Date:	Sat, 19 Sep 2009 14:40:08 +0300
From:	Avi Kivity <avi@...hat.com>
To:	Arjan van de Ven <arjan@...radead.org>
CC:	Jim Meyering <jim@...ering.net>, Theodore Tso <tytso@....edu>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: efficient access to "rotational";  new fcntl?

On 09/19/2009 02:30 PM, Arjan van de Ven wrote:
> On Sat, 19 Sep 2009 14:11:38 +0300
> Avi Kivity<avi@...hat.com>  wrote:
>
>    
>> On 09/19/2009 12:19 PM, Arjan van de Ven wrote:
>>      
>>>> However, sort *would* benefit, and some UCLA students implemented
>>>> that for a term project.  Unfortunately, the project is stalled
>>>> because the implementation was not efficient enough, and no one
>>>> has found the time to improve it since.
>>>>
>>>>          
>>> parallel sort... call me skeptical. My gut feeling is that you'll
>>> get killed by communication overhead.
>>> (sort seems to be more communication than raw cpu use)
>>>
>>>
>>>        
>> Why?  a sort that fits in memory is purely cpu and memory access.
>>      
> memory access == communication due to cache line bounces....
>    


For a large sort, cache line bounces will be negligible.  First, memory 
size greatly exceeds cache size, so most accesses hit main memory rather 
than a line owned by another cpu.  Second, every cache line will be 
accessed multiple times, which amortizes the cost of any bounce.

If you're sorting 2MB, then yes, threads aren't needed.

>> Instead of O(N log N) you'd get K * O(N/K log N/K) followed by an
>> O(N) merge.  For large N and small K, you get a speedup of roughly K
>> (since the O(N) merge is dominated by the preceding sort).
>>      
> doing big-O arithmetic and then adding constant divisors/multipliers is
> rather.. dangerous ;)
>    

I'm wearing my seat belt.
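
(To put rough, purely illustrative numbers on it: with N = 10^8 keys and 
K = 4 threads, a single-threaded sort does about N * log2(N) ~ 2.7*10^9 
comparisons; each thread in the parallel version does about 
(N/4) * log2(N/4) ~ 6*10^8, and the final O(N) merge adds on the order of 
10^8 more.  So the wall-clock speedup comes out close to, though a bit 
below, K = 4, before the constant factors Arjan is worried about.)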

> You actually get K * C1 * O(N/K log N/K) + C2 * O(N),
> where C1 and C2 are the actual costs of the extra inter-CPU communication.
> (and for most sorting, I suspect the communication costs dominate over
> CPU cost)
>
> I'd be happy to be proven wrong...
>    

Again, if the dataset is large, only a small fraction of the cache lines 
will be on another cpu.  The majority will be in main memory.
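
For concreteness, here is a minimal sketch of the scheme under discussion: 
split the array into K chunks, sort each chunk on its own thread, then do 
one serial merge.  It is only an illustration under simple assumptions 
(plain C with POSIX threads and qsort(), integer keys, a fixed K, no error 
handling), not the coreutils sort or the UCLA students' implementation:

#include <pthread.h>
#include <stdlib.h>
#include <string.h>

#define K 4   /* number of worker threads; an arbitrary choice for the example */

struct chunk { int *base; size_t n; };

static int cmp_int(const void *a, const void *b)
{
    int x = *(const int *)a, y = *(const int *)b;
    return (x > y) - (x < y);
}

/* Each thread sorts its own chunk: O((N/K) log (N/K)) work, no sharing. */
static void *sort_chunk(void *arg)
{
    struct chunk *c = arg;
    qsort(c->base, c->n, sizeof *c->base, cmp_int);
    return NULL;
}

/* Serial K-way merge into dst.  For small fixed K this is O(N). */
static void merge_chunks(int *dst, struct chunk *c, size_t nchunks)
{
    size_t *pos = calloc(nchunks, sizeof *pos);

    for (;;) {
        size_t best = nchunks;          /* chunk with the smallest head element */

        for (size_t i = 0; i < nchunks; i++)
            if (pos[i] < c[i].n &&
                (best == nchunks || c[i].base[pos[i]] < c[best].base[pos[best]]))
                best = i;

        if (best == nchunks)
            break;                      /* every chunk is exhausted */
        *dst++ = c[best].base[pos[best]++];
    }
    free(pos);
}

void parallel_sort(int *a, size_t n)
{
    pthread_t tid[K];
    struct chunk c[K];
    size_t per = n / K;

    /* Phase 1: K independent sorts, one per thread. */
    for (size_t i = 0; i < K; i++) {
        c[i].base = a + i * per;
        c[i].n = (i == K - 1) ? n - i * per : per;
        pthread_create(&tid[i], NULL, sort_chunk, &c[i]);
    }
    for (size_t i = 0; i < K; i++)
        pthread_join(tid[i], NULL);

    /* Phase 2: single O(N) merge back into the original array. */
    int *out = malloc(n * sizeof *out);
    merge_chunks(out, c, K);
    memcpy(a, out, n * sizeof *a);
    free(out);
}

Build with -pthread.  The point is only that phase 1 is K independent 
O(N/K log N/K) sorts with no sharing, and phase 2 is a single O(N) pass; 
whether the constants make it worthwhile is exactly the question above.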

-- 
Do not meddle in the internals of kernels, for they are subtle and quick to panic.

