Date: Thu, 13 Mar 2014 20:59:38 -0400
From: Bill Cox <>
Subject: Re: [PHC] "Why I Don't Recommend Scrypt"

On Thu, Mar 13, 2014 at 6:47 PM, Peter Maxwell <> wrote:
> On 13 March 2014 22:20, Bill Cox <> wrote:
>> On Thu, Mar 13, 2014 at 5:58 PM, Peter Maxwell <>
>> wrote:
> The comparative number of deployed instances requiring "military strength"
> security - however that is defined - is likely to be very small compared to
> run-of-the-mill deployments.  My personal feelings are that the first
> priority is trying to optimize for the most common uses and assume the
> parameters can be ramped-up for the paranoid.

That works for me.
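To make "ramped up for the paranoid" concrete, a sketch of what two-tier cost settings might look like with scrypt-style parameters (the tier values here are illustrative assumptions, not recommendations from this thread):

```python
import hashlib

def hash_password(pw: bytes, salt: bytes, paranoid: bool = False) -> bytes:
    """Illustrative two-tier scrypt: run-of-the-mill vs ramped-up cost.

    The parameter choices are assumed examples only: N = 2**14 (~16 MiB
    working set) for common deployments, N = 2**20 (~1 GiB) for the
    paranoid. r and p are held fixed for simplicity.
    """
    n = 2**20 if paranoid else 2**14
    return hashlib.scrypt(pw, salt=salt, n=n, r=8, p=1,
                          maxmem=2 * 128 * 8 * n,  # room for the 128*r*N working set
                          dklen=32)
```

The point being that the defender's code path doesn't change; only the cost parameters do.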

> I agree that massively multi-core CPUs are on the horizon (fairly sure I'd
> read an article a few years back about Intel doing this?).  It does somewhat
> throw a spanner in the works in terms of current assumptions though: if
> general purpose CPUs end up having thousands of cores, it makes more sense
> to hammer compute tasks rather than memory.
> I thought bcrypt was secure(ish) against GPUs but not ASICs?

I could be wrong, because I've never looked into the bcrypt algorithm,
but I hear it hashes 4 KiB of memory in a short, randomized loop, over
and over.  Assuming that's what it does, it sounds like integrating
1,000 of them on an ASIC running at the same speed as your CPU should
be doable.  4 KiB of memory is pretty small in the latest processes.
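For reference, the 4 KiB in question is bcrypt's four Blowfish S-boxes (4 x 256 x 32-bit words), read with data-dependent indices every round. A toy sketch of just that access pattern (this is not real bcrypt: real bcrypt derives the S-boxes from the password via an expensive key setup, and the LCG fill here is a stand-in):

```python
# Sketch of bcrypt's inner memory-access pattern: four 256-entry
# S-boxes (4 KiB total) read with data-dependent indices each round.
# Illustration of why the working set is tiny and the loop short;
# NOT an implementation of bcrypt.

MASK32 = 0xFFFFFFFF

def make_sboxes(seed=0x9E3779B9):
    """Fill four 256-entry S-boxes with pseudo-random 32-bit words."""
    boxes, x = [], seed
    for _ in range(4):
        box = []
        for _ in range(256):
            x = (x * 1103515245 + 12345) & MASK32  # toy LCG filler
            box.append(x)
        boxes.append(box)
    return boxes

def feistel_f(s, x):
    """Blowfish-style F: four data-dependent S-box lookups per call."""
    a, b = (x >> 24) & 0xFF, (x >> 16) & 0xFF
    c, d = (x >> 8) & 0xFF, x & 0xFF
    return ((((s[0][a] + s[1][b]) & MASK32) ^ s[2][c]) + s[3][d]) & MASK32

def rounds(s, left, right, n=16):
    """Run n Feistel rounds; every round hits the 4 KiB of S-box state."""
    for _ in range(n):
        left, right = right, left ^ feistel_f(s, right)
    return left, right
```

Since all the state fits in 4 KiB, each ASIC core only needs a small SRAM plus a few adders and XORs, which is what makes replicating it cheap.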

>> Given a chance to develop a new PHS, I would hope solutions will be
>> found that defend against ASICs, GPUs, and FPGAs to varying extents,
>> without giving up on any of them.
> In an ideal world, yes.  However, I'm getting the distinct impression that
> to defend against ASICs imposes utterly unrealistic resource requirements on
> the defender, to the extent that it would hinder adoption of the PHS.  Is it
> not worth specifying default parameters at a more realistic level and
> explicitly stating the risks?  (with the option for higher security by
> increasing parameters in cases when it's required)

I have an ASIC/FPGA bias because I've worked with them for years, but
I was feeling the opposite.  By defending against ASICs, we help the
defender choose resources that defend not just against ASICs, but
against all hardware attacks.  When we consider ASIC attack
resistance, we ask: what are the fundamental metrics in silicon that
even the most well-funded adversary cannot circumvent?  By defending
against them, we focus on the most basic metrics that every attacker
will face.
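A back-of-envelope version of that silicon argument, with all process numbers being illustrative assumptions (not vendor data): the die area an attacker must spend is dominated by the hash's memory footprint, so small-memory hashes replicate cheaply and large-memory ones don't.

```python
# Back-of-envelope ASIC sizing for replicated hash cores.
# All constants below are assumed, round-number estimates.

SRAM_UM2_PER_BIT = 0.12   # assumed 28nm-class SRAM bitcell area, um^2
LOGIC_OVERHEAD = 4.0      # assume core logic takes ~4x its SRAM area
DIE_MM2 = 100.0           # assumed modest die size

def cores_per_die(mem_bytes, die_mm2=DIE_MM2):
    """Estimate how many hash cores with mem_bytes of SRAM fit on a die."""
    sram_um2 = mem_bytes * 8 * SRAM_UM2_PER_BIT
    core_um2 = sram_um2 * (1 + LOGIC_OVERHEAD)
    return int(die_mm2 * 1e6 / core_um2)

print(cores_per_die(4 * 1024))          # bcrypt-sized: thousands of cores
print(cores_per_die(16 * 1024 * 1024))  # scrypt-sized: about one core
```

Under these assumptions a 4 KiB working set yields thousands of cores per die, while a 16 MiB working set barely fits once, which is the sense in which memory area is a metric no adversary can circumvent.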

The metrics I see so far aren't so different from those for defending
against FPGA attacks, though I have to admit that I find modern GPU
architectures an alien conspiracy designed to destroy civilization!

