Message-ID: <20140403174808.GA13674@bolet.org>
Date: Thu, 3 Apr 2014 19:48:08 +0200
From: Thomas Pornin <pornin@...et.org>
To: discussions@...sword-hashing.net
Subject: Re: [PHC] babbling about trends

On Thu, Apr 03, 2014 at 07:05:58PM +0200, Krisztián Pintér wrote:
> 2a. memory
> 
> memory will no doubt be cheaper and cheaper. the question is whether
> we will use more RAM or will it level out at 16G or 64G or something.

There is an argument about gate density which states that the minimum
size for a working gate is about 10 nm or so. An IBM team once built a
6 nm gate, but it was awfully "leaky"; the reason being that when wires
are too close to each other, electrons jump from one to the other
through quantum tunneling. Assuming that you need a 10x10 nm area for
each bit, a 1x1 cm chip may contain at most about 125 GB worth of data.
So the limit would be on the order of a few terabytes of RAM in a
laptop.

However, one can imagine "layered" chips, in which tens or hundreds of
1 TB layers would be piled up. Right now, producing multi-layered chips
is a serious challenge for foundries, because each stray dust speck can
kill the whole chip. Still, this is an engineering problem which might
perhaps be solved in the future. Also, RAM produces less heat than a
CPU (extra heat is a big issue for multi-layered CPUs, much less so for
storage).

RAM _latency_, on the other hand, is not dropping nearly as fast.
Comparing my home computer from 20 years ago to the one I own now, RAM
size has been multiplied by 2000 (4 MB then, 8 GB now), but latency has
been divided by only 5 or so. I expect latency to level out much sooner
than total RAM size.
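
A quick sketch of what those two ratios imply together (my own
arithmetic from the figures above, not measured data): if RAM size grew
by ~2000x while random-access latency improved by only ~5x, then
walking all of RAM with latency-bound random accesses now takes roughly
400x longer than it did then:

    size_growth  = (8 * 1024) / 4   # 8 GB vs 4 MB: ~2000x more RAM
    latency_gain = 5                # latency only ~5x better
    print(size_growth / latency_gain)   # -> ~400x more latency-bound
                                        #    time to touch every word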


> my prediction is: single core performance will not increase that much,
> but parallelism will explode.

That's the logical conclusion, but it has been made repeatedly, with
seemingly very good arguments, over the last 20 years, and we must
admit that parallelism has, so far, crept in under the door rather than
exploded. In the 1990s, it was predicted that we would soon have
computers with thousands of cores; in 2014 we have four cores and a few
SIMD instructions (that's what I have on my laptop).

We _will_ get more parallelism in the future, but it will be slow to
come. Most of our programming tools, algorithms, and thinking patterns
are awfully sequential; and the "shared RAM" multithreading model does
not help either. Going parallel is like reinventing a whole Science. It
takes time.


> sooner or later we will switch to optical CPUs. i don't know a single
> thing about what can we expect from them.

There are opto-electronic devices, which combine optics and
electronics; they work, but they are slow. Then there are fully optical
processors, which might be a tad faster than their electronic
counterparts (mostly because light beams can cross each other, so the
information travel time between two successive gates is reduced); but
whether a fully optical CPU would really be faster than a
transistor-based CPU is open to debate. In any case, there is no
theoretical potential for a huge boost (a fully optical CPU will not be
100x faster than an electronic one).

As far as I know, fully optical CPUs are used in some military
applications because they are quite robust against the EMP from
detonating nuclear warheads. Not exactly in the scope of PHC, though.


> But how do things change if I can just borrow the computational power
> for any amount of time?

At that point you have to ask yourself to what extent you can trust
that kind of rented power. The cloud owner could be dishonest, or
maintain imperfect isolation between customers. The cloud is THE
realistic model for all cache-based timing analyses: through such
leaks, one VM could spy on the secrets of another VM. If we go "cloud"
then we must think about these issues as well.

(Self-promotion: in that model, an algorithm which allows delegation of
work to untrusted third parties can be quite handy.)


	--Thomas Pornin
