Date: Tue, 8 Apr 2014 13:07:41 +0200
From: Dmitry Khovratovich <>
To: "" <>
Subject: Re: [PHC] Re: Proposed changes to author's code

Hi Bill,

thank you for the efforts!

The legal input ranges for Argon are:
1 <= t_cost <= 2^24
1 <= m_cost <= 2^24

We prefer t_cost = 3 for all m_cost starting from 128, and t_cost = 256 -
2*m_cost for smaller values.

The total memory size in bytes is m_cost*1024.
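The ranges and preferences above can be captured in a small helper; this is just an illustrative sketch (the function names are mine, not from the Argon reference code):

```python
# Sketch of the Argon cost-parameter rules described in this message.
# Names like recommended_t_cost are illustrative, not from the Argon code.

T_MAX = 1 << 24  # legal upper bound for t_cost
M_MAX = 1 << 24  # legal upper bound for m_cost

def total_memory_bytes(m_cost):
    """Total memory used, in bytes: m_cost * 1024."""
    assert 1 <= m_cost <= M_MAX
    return m_cost * 1024

def recommended_t_cost(m_cost):
    """Preferred t_cost: 3 for m_cost >= 128, else 256 - 2*m_cost."""
    assert 1 <= m_cost <= M_MAX
    return 3 if m_cost >= 128 else 256 - 2 * m_cost
```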

I do not quite understand what you mean by average memory*time profile.
Argon always does ((6t_cost + 5)*m_cost*64) memory accesses and performs
f(t_cost,m_cost) operations, where f is a bilinear function (see Section
6.4 of the specification for details).
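The access count stated above can be written out directly; a minimal sketch (the function name is mine):

```python
def memory_accesses(t_cost, m_cost):
    """Number of memory accesses Argon performs: (6*t_cost + 5) * m_cost * 64."""
    return (6 * t_cost + 5) * m_cost * 64
```

For example, at the preferred t_cost = 3 with m_cost = 128 this gives (6*3 + 5) * 128 * 64 = 188416 accesses.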

We do not recommend using the reference implementation for benchmarking,
though. An optimized implementation will appear soon.

Best regards,

On Sun, Apr 6, 2014 at 9:31 PM, Bill Cox <> wrote:

> Also, I want to start recording some data for each entry that will
> help in automated testing.  In particular the legal input ranges for
> t_cost and m_cost.  It would also be great if the authors could let me
> know their preferred t_cost and m_cost settings for the different modes
> they support, for example L1 cache such as Blowfish, L1 or L2/L3 cache
> such as Catena, L1 or L2/L3 or main memory such as yescrypt.
> Also, it would be great if authors of memory hard PHS's could let me
> know how to convert m_cost and t_cost into total hashed memory, total
> memory bandwidth for the memory targeted (for example L1 hashing may
> be higher bandwidth than external DRAM hashing).  If authors could
> also let me know what they think their average memory*time profile is,
> I can add that in comparison tables.  For example, my TwoCats fills
> memory linearly with time, and has an average memory*time that is
> about 1/2 of the peak.  Catena I think is something closer to 11/12ths
> of the peak.
> Feel free to suggest any automated tests, and any new measurements
> that should be done.  For testing, I have access to my own machine,
> which is biased towards my own TwoCats entry since that's where I
> tuned it, and probably also Alexander's two machines he gave me
> accounts on, one a Haswell processor, and the other a nice
> high-memory-bandwidth Sandybridge server.  I don't know where else to
> test.
> Thanks,
> Bill

Best regards,
Dmitry Khovratovich
