Date: Wed, 30 Sep 2015 22:02:23 +0200
From: Dmitry Khovratovich <khovratovich@...il.com>
To: "discussions@...sword-hashing.net" <discussions@...sword-hashing.net>
Subject: Re: [PHC] Re: Asymmetric proof-of-work based on the Generalized
 Birthday problem

Dear John,

I could not obtain explicit statements of the time and memory requirements
of your scheme from the paper, so some of the claims in our paper may stem
from a misunderstanding.

Andersen's analysis stated that what could previously be done in time T and
memory M can now be done with memory M/50 and time ~T. This implies a new
time-space tradeoff of the form shown in Figure 4 of our paper. You must
have such a tradeoff in some form to make any positive claims about
amortization and tradeoff resilience. It is currently unclear from the
paper how much Andersen's analysis affected the scheme's properties. As a
result, I had to rely on his analysis and on the parallelizability he
describes.
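The kind of tradeoff statement meant here can be written down explicitly. In the sketch below, the penalty exponent c is a hypothetical illustration parameter, not a figure taken from either paper:

```python
# Hypothetical tradeoff curve: reducing memory by a factor q
# multiplies running time by q**c. The exponent c is illustrative
# only; neither paper states this exact form.
def time_penalty(q, c):
    """Time multiplier when memory shrinks from M to M/q."""
    return q ** c

# An Andersen-style result "memory M/50 at time ~T" corresponds to
# c near 0 at q = 50 (essentially no time penalty):
print(time_penalty(50, 0))   # 1
# whereas a memory-hard design would want c >= 1:
print(time_penalty(50, 1))   # 50
```

A rigorous claim of this shape (time as an explicit function of the memory reduction factor) is what would allow the scheme to be cited precisely.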

If you could make more rigorous claims about the behaviour of your scheme
under certain memory reductions and increases, it would help me reference
your work properly.

Dmitry

On Wed, Sep 30, 2015 at 3:40 PM, John Tromp <john.tromp@...il.com> wrote:

> dear Dmitry,
>
> > Comments are welcome.--
>
> Your statements on Cuckoo Cycle appear to be based
> on an obsolete version of the paper.
>
> Dramatic optimization has been impossible for over a year,
> ever since I incorporated Andersen's edge-trimming into the
> reference implementation in May 2014.
>
> Cuckoo Cycle is amortization free, as you need multiple graphs
> to find multiple 42-cycles (the chances of a single graph having
> multiple 42-cycles are exceedingly small).
>
> Why do you qualify Cuckoo Cycle as parallelizable, rather than
> parallelism constrained, as it is by RAM bandwidth?
>
> Btw, Cuckoo Cycle graphs are undirected, contrary to your description.
>
> There is good evidence of edge trimming being optimal,
> as it uses just 1 bit per edge (as well as the most trivial of code),
> and there is a quadratic lower bound on the
> product of time and space for graph traversal
> (a different but closely related problem).
>
> In comparison, the algorithm in the paper is quite a bit more complicated,
> with tables of hashes and pairs of indices.
>
> Could I please get a copy of your source code?
>
> regards,
> -John
>
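The edge trimming referred to above can be sketched roughly as follows. The toy random edges and plain dict counters below are placeholders for illustration only, not Cuckoo Cycle's siphash-generated edges or the reference implementation's 1-bit-per-edge counters:

```python
# A rough sketch of degree-based edge trimming: in an undirected
# bipartite graph, an edge whose endpoint has degree 1 cannot lie
# on a cycle, so it can be discarded. Repeating this on alternating
# sides leaves only the 2-core, where any 42-cycle must live.
import random

N = 1 << 10                       # nodes per side (toy size)
random.seed(1)
edges = [(random.randrange(N), random.randrange(N))
         for _ in range(N // 2)]  # sparse: fewer edges than nodes

def trim_round(edges, side):
    """Drop edges whose endpoint on `side` (0 or 1) has degree 1."""
    degree = {}
    for e in edges:
        degree[e[side]] = degree.get(e[side], 0) + 1
    return [e for e in edges if degree[e[side]] > 1]

alive = edges
while True:                       # trim both sides to a fixed point
    n = len(alive)
    alive = trim_round(trim_round(alive, 0), 1)
    if len(alive) == n:
        break

# every surviving edge now has both endpoints of degree >= 2
for side in (0, 1):
    deg = {}
    for e in alive:
        deg[e[side]] = deg.get(e[side], 0) + 1
    assert all(d >= 2 for d in deg.values())
```

The appeal of this approach is that the per-edge state is a single liveness bit plus small per-node counters, which is what underlies the 1-bit-per-edge claim above.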



-- 
Best regards,
Dmitry Khovratovich
