Message-ID: <CAOLP8p5Sc0EsxjmXLo8SpjAU2-09A_Z+_W22AKUWWeEc4jVgLQ@mail.gmail.com>
Date: Sun, 6 Apr 2014 15:31:59 -0400
From: Bill Cox <waywardgeek@...il.com>
To: discussions@...sword-hashing.net
Subject: Re: Proposed changes to author's code
Also, I want to start recording some data for each entry that will
help in automated testing, in particular the legal input ranges for
t_cost and m_cost. It would also be great if the authors could let me
know their preferred t_cost and m_cost settings for the different modes
they support: for example, L1 cache (such as Blowfish), L1 or L2/L3
cache (such as Catena), and L1, L2/L3, or main memory (such as
Yescrypt).
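Something like the sketch below is the kind of record I have in mind.
The struct and its field names are just placeholders I made up, not any
agreed-upon format; authors would fill in the real numbers for their
entries.

    /* Hypothetical per-entry record for the automated test harness. */
    typedef struct {
        const char *name;             /* entry name, e.g. "TwoCats" */
        unsigned int min_t_cost;      /* smallest legal t_cost */
        unsigned int max_t_cost;      /* largest legal t_cost */
        unsigned int min_m_cost;      /* smallest legal m_cost */
        unsigned int max_m_cost;      /* largest legal m_cost */
        unsigned int pref_t_cost_l1;  /* preferred t_cost for L1-cache mode */
        unsigned int pref_m_cost_l1;  /* preferred m_cost for L1-cache mode */
        unsigned int pref_t_cost_ram; /* preferred t_cost for main-memory mode */
        unsigned int pref_m_cost_ram; /* preferred m_cost for main-memory mode */
    } EntryTestInfo;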
Also, it would be great if authors of memory-hard PHS entries could
let me know how to convert m_cost and t_cost into total hashed memory
and into total bandwidth for the memory targeted (for example, L1
hashing may have higher bandwidth than external DRAM hashing). If
authors could also let me know what they think their average
memory*time profile is, I can add that to the comparison tables. For
example, my TwoCats fills memory linearly with time, so its average
memory*time is about 1/2 of the peak. I think Catena is closer to
11/12ths of the peak.
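To make that concrete, here is roughly what I'd like to be able to
compute per entry. The m_cost-to-bytes conversion below is a made-up
placeholder (it just treats m_cost as KiB), and the profile fraction is
whatever the author reports: a fill that grows linearly from zero to
the peak averages to about 1/2 of the peak, while Catena is closer to
11/12ths.

    #include <stdint.h>

    /* Placeholder conversion from m_cost to peak memory in bytes; each
     * entry's authors would supply the real formula. */
    static uint64_t peak_memory_bytes(unsigned int m_cost)
    {
        return (uint64_t)m_cost * 1024;  /* made-up convention: m_cost in KiB */
    }

    /* Average memory*time in byte-seconds, from peak memory, wall-clock
     * runtime, and the author-reported fraction of the peak that memory
     * averages to over the run. */
    static double average_memory_time(uint64_t peak_bytes, double seconds,
                                      double profile_fraction)
    {
        return (double)peak_bytes * seconds * profile_fraction;
    }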
Feel free to suggest any automated tests, and any new measurements
that should be done. For testing, I have access to my own machine,
where results are biased towards my own TwoCats entry since that's
where I tuned it, and probably also the two machines Alexander gave me
accounts on, one with a Haswell processor and the other a nice
high-memory-bandwidth Sandy Bridge server. I don't know where else to
test.
Thanks,
Bill