Message-ID: <CAAS2fgTiUjUULx5DYPJhA4Jwbsc8egVYOSibexH2eTnMf-0OBw@mail.gmail.com>
Date: Wed, 1 Apr 2015 09:18:58 +0000
From: Gregory Maxwell <gmaxwell@...il.com>
To: discussions@...sword-hashing.net
Subject: Re: [PHC] OMG we have benchmarks
On Wed, Apr 1, 2015 at 9:14 AM, Krisztián Pintér <pinterkr@...il.com> wrote:
> On Wed, Apr 1, 2015 at 11:01 AM, Milan Broz <gmazyland@...il.com> wrote:
>> - the low memory setting is "unstable" because of RUSAGE measurement:
>> Real memory use is simply the difference of getrusage(RUSAGE_SELF, ...)
>> before and after the run (well, here the maximum of three runs).
>
> just a quick question: wouldn't it be easier and more precise to
> calculate the memory requirement instead of measuring it? it should be
> quite straightforward from the algorithms.
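For anyone who hasn't looked at the harness, the measurement Milan describes
amounts to something like the sketch below. The run_candidate() stand-in and
the 64 MiB figure are mine (hypothetical), not taken from his code; the point
is only where the getrusage() calls sit and what ru_maxrss actually reports.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/resource.h>

    /* Hypothetical stand-in for the candidate PHS() call under test;
     * it just touches 64 MiB so the pages really get mapped. */
    static void run_candidate(void)
    {
        size_t len = (size_t)64 << 20;
        unsigned char *buf = malloc(len);
        if (buf) {
            memset(buf, 0xA5, len);
            free(buf);
        }
    }

    int main(void)
    {
        struct rusage before, after;

        getrusage(RUSAGE_SELF, &before);
        run_candidate();
        getrusage(RUSAGE_SELF, &after);

        /* ru_maxrss is in kilobytes on Linux.  It tracks the peak
         * resident set, so this difference also counts malloc overhead,
         * stack growth, and anything else touched in between -- i.e.
         * it can only overstate what the algorithm itself needs. */
        printf("approx. memory used: %ld KiB\n",
               after.ru_maxrss - before.ru_maxrss);
        return 0;
    }
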
And it would also avoid mistakes like crediting an algorithm with more
memory than it actually needs due to malloc overhead or the like. It's
not bad to look at the rusage data, since it should always be higher
than the actual usage and might catch some mistakes... but it shouldn't
be the definitive figure, as an attacker (or a mature implementation)
will happily optimize out whatever excess overhead you're accidentally
counting with it.
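
As a concrete illustration of the "just calculate it" approach (using scrypt
rather than one of the candidates, and parameters I picked arbitrarily): the
dominant memory term follows directly from the parameters as 128 * r * N
bytes for the big V array, so no measurement is needed at all.

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        /* Example parameters; not taken from the benchmark runs. */
        uint64_t N = UINT64_C(1) << 14;  /* scrypt cost parameter */
        uint64_t r = 8;                  /* scrypt block size parameter */

        uint64_t v_bytes = 128 * r * N;  /* size of the big V array */
        printf("scrypt N=2^14, r=8 -> %llu bytes (%llu MiB)\n",
               (unsigned long long)v_bytes,
               (unsigned long long)(v_bytes >> 20));
        return 0;
    }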