Message-ID: <CAOLP8p7PBT-4MvOabe5ifWm2S4ATDRtVLNZ9_QeH=ERxQqpaRA@mail.gmail.com>
Date: Fri, 28 Feb 2014 08:32:56 -0500
From: Bill Cox <waywardgeek@...il.com>
To: discussions@...sword-hashing.net
Subject: Re: [PHC] Future CPUs and GPUs?

On Fri, Feb 28, 2014 at 3:26 AM, Jeremi Gosney <epixoip@...dshell.nl> wrote:
> On 2/27/2014 11:37 PM, Larry Bugbee wrote:
>> All crystal balls are fuzzy at best, but in five years, what will be the likely GPU configurations and specifications at the knee of the cost/performance curve?  ...ten years? Adjustable parameters will help, but how will proposed algorithms fare against future GPUs?
>
> This is just my opinion, but I view GPGPU as a hack, and I predict that
> in 5-10 years time, GPUs will be largely irrelevant for compute. I think
> the future lies in manycore CPUs -- dense processors whose primary
> purpose is to function as a computational accelerator, and not to push
> pixels.
>
> The first generation of manycore accelerators are already on the market.
> And while they have yet to really impress the GPGPU crowd, I don't
> think we will have to wait long before manycore CPUs begin to surpass
> GPUs
> in terms of general compute performance.
>
> I think that in five years' time we'll easily see manycore CPUs with as
> many as 4k cores (I believe Adapteva already has the capability to
> produce such a chip), and in 10 years' time I'm sure we will see CPUs
> with 8k cores, or maybe even more.
>
> Again, just my opinion, but I really can't see it going any other way.
> Especially with GPU manufacturers already reaching a performance
> plateau. AMD (the password cracker's vendor of choice) failed to deliver
> on GCN 2.0, and instead rolled out a new product line based on
> last-generation technology. They were able to squeak out a new flagship
> GPU which is faster than their previous flagship GPU, but this is
> because they simply increased the area of the processor and added more
> cores. There are of course major limitations to this approach, so unless
> real innovation happens in that space, it surely will not be long before
> manycore CPUs overtake GPUs.

I think a lot of what we're already seeing is due to a soft end to
Moore's Law, at least for planar silicon.  I remember laughing at
people who decided to keep working in 2u CMOS when 1u was solid and
cheap.  I thought 1.5u 3-metal NMOS at HP in 1990 was a riot -
everyone had switched to CMOS at least 18 months earlier.  Being on
the cutting edge took only a few thousand dollars per design, and the
fabs were affordable by medium sized companies.  Now I'm still working
with 350nm silicon while Intel is shipping 22nm (roughly 250X denser
by area!), and the rest of the world wishes they could afford 28nm.

Anyway, even Intel is coming to the end of Moore's Law, starting with
clock speed.  The Sandy Bridge machine Solar Designer lent me for
benchmarks is easily faster for most applications than his Haswell
machine.  The older one seems to be a high-end server while the new
one seems to have some cheap memory and a fast CPU, but Sandy Bridge
is two generations old!

So how much further can Intel go?  My physics geek friend has told me
for decades that the answer is 10nm wires.  That's when electrons stop
ignoring each other and start lining up single file.  Is it possible
to shrink below a 10nm process node?  Beats me, but we're seeing
failures in next-gen tech - big failures, like no EUV and no e-beam
lithography.  IBM wants out now, leaving Intel as the last
cutting-edge US fab standing.  I think I see some signs that TSMC is
having trouble shrinking, even with Intel blazing the path.  SRAM
prices have also been holding for a while...

Assuming Intel has only 15nm, and then maybe one more node, 11nm,
there is only another 4X left in die shrink.  So, take our 8-core
CPUs, and maybe make them 32 cores, or 64 on a bigger die.  Another
trend is that we're already shipping more cores than people need or
want.  My Moto X Republic Wireless phone (the best phone, price, and
provider in the world, IMO) with its dual-core last-gen ARM processor
is easily a match for the latest quad-core newest-gen Samsung phone.
Spec-mongers want more cores, but why?

Another trend, because we already have *too much* compute power, is
that no one cares about the kind of thing I love to do - make code go
really fast.  How many companies need speed-freak C/C++ coders,
compared to hackers writing interpreted Ruby?

So, my conjecture for the future is that it's not going to be much
different from what we see now, at least not as soon as we expect.
Maybe 10nm will become mainstream, but that's a small shrink.
Processor and GPU core counts may go up 8X or 16X, which is still
amazing, but nothing like it's been.

Now for password crackers... there will be a winner of the PHC.  Most
likely, Solar Designer's schemes will either be the winner directly,
or heavily influence the winner, meaning current GPU attacks are
toast.  Maybe the next round of GPUs will be rearchitected to better
attack the PHC winner as a result.  Certainly Bitcoin and Litecoin
mining have heavily impacted AMD's bottom line, so surely they are
taking notice.

So long as we also compute-time harden our KDFs, FPGA attacks will be
toast for memory sizes that bust out of the FPGA's on-chip RAM,
because the external memory required will make it cheaper to just use
cheap CPUs instead.  ASIC attacks will become impractical for anyone
other than government-sized organizations.
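
To make the idea concrete, here's a toy C sketch (my own
illustration, not any actual PHC entry - the mix function and all the
names are made up): a big tunable buffer makes the attacker pay for
memory, data-dependent reads defeat cheap prefetching, and a separate
time-cost knob hardens compute time even at small memory settings.

/*
 * Toy memory-hard, compute-time-hardened KDF core.  NOT a real
 * design - it only illustrates the two cost knobs discussed above.
 */
#include <stdint.h>
#include <stddef.h>

#define ROTL64(x, n) (((x) << (n)) | ((x) >> (64 - (n))))

/* One ARX (add-rotate-xor) step; a placeholder, not a vetted
 * cryptographic primitive. */
static uint64_t mix(uint64_t a, uint64_t b)
{
    a += b;
    a = ROTL64(a, 23);
    return a ^ b;
}

/*
 * mem: buffer of 'words' (>= 1) 64-bit words, pre-filled by
 * expanding a salted password hash - 'words' is the memory cost.
 * t_cost multiplies the number of data-dependent passes, hardening
 * compute time independently of memory size.
 */
uint64_t toy_mhkdf(uint64_t *mem, size_t words, unsigned t_cost)
{
    uint64_t state = mem[words - 1];

    for (unsigned pass = 0; pass < t_cost; pass++) {
        for (size_t i = 0; i < words; i++) {
            /* The address depends on the running state, so an
             * attacker can't pipeline or prefetch these reads
             * cheaply. */
            size_t j = (size_t)(state % words);
            state = mix(state, mem[j]);
            mem[i] = mix(mem[i], state);
        }
    }
    return state;  /* a real design would feed this to a final hash */
}

An FPGA attacker has to bolt on external DRAM once 'words' exceeds
the on-chip block RAM, and at that point the cost advantage over a
cheap CPU with commodity DIMMs largely evaporates.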

Going forward, the PHC winner should in theory make password databases
reasonably secure compared to the mess we have now, regardless of GPU
architectures, so long as we take ASICs into account.

One more prediction... somewhere there's a guy like me who thinks
about configurable logic architectures in his sleep, and who really
does know how to build near optimal highly configurable chips for
attacking ARX based memory-hard KDFs as cheaply as possible.  That
guy, unlike me, may know how to keep a secret, and he'll have a very
cool job working for the NSA, where he will be bewildered by
government stupidity as they take his ultra-cheap solution and
somehow make its cost spin out of control.
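
As a concrete example of what "ARX" means here: the well-known ChaCha
quarter-round is nothing but 32-bit adds, rotates, and xors.  Those
operations are fast on CPUs but also trivially small in hardware,
which is exactly why such configurable chips are a plausible threat:

#include <stdint.h>

#define ROTL32(x, n) (((x) << (n)) | ((x) >> (32 - (n))))

/* ChaCha quarter-round: nothing but 32-bit add, rotate, xor. */
static void quarter_round(uint32_t *a, uint32_t *b,
                          uint32_t *c, uint32_t *d)
{
    *a += *b; *d ^= *a; *d = ROTL32(*d, 16);
    *c += *d; *b ^= *c; *b = ROTL32(*b, 12);
    *a += *b; *d ^= *a; *d = ROTL32(*d, 8);
    *c += *d; *b ^= *c; *b = ROTL32(*b, 7);
}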

Bill
