Open Source and information security mailing list archives
 
Date: Mon, 9 Mar 2015 21:24:01 +0000
From: Marsh Ray <maray@...rosoft.com>
To: "discussions@...sword-hashing.net" <discussions@...sword-hashing.net>
Subject: RE: [PHC] PHC output specifics

-----Original Message-----
From: Peter Gutmann [mailto:pgut001@...auckland.ac.nz] 
>
> I don't even know if a SHOULD will make much difference.  The 
> people implementing the crypto are highly unlikely to be the 
> ones providing the passwords to the API,

It's easy to be cynical in our industry, but I promise you there
are lots of people who read RFCs, specifications, and other
technical documents and implement them accurately. Many times on
several distinct projects in my career I have been one of them.

> so from the crypto
> -implementer point of view a password is a { void *password
> , int length } combination, and from the user of the password
> -processing function it's whatever they want it to be (ASCII,
> UTF-8, Unicode, etc).

Users of C APIs (outside the US-AU-NZ sphere) are familiar with character
set encoding issues and could be receptive to SHOULD-level guidance.
But most users will probably be calling this function from higher-level
scripting languages, and those languages already take string encodings
seriously because encodings matter so much on the web.
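To make the point concrete for those scripting-language callers, here is a minimal sketch in Python (SHA-256 is only a stand-in for whatever PHC scheme is ultimately chosen, and `hash_password` is a hypothetical wrapper name, not anything from the spec):

```python
import hashlib

def hash_password(password: str) -> bytes:
    # Hypothetical high-level wrapper: the underlying primitive (SHA-256
    # here, standing in for a real password-hashing function) only ever
    # sees bytes, so the wrapper must pick an encoding on the caller's
    # behalf. UTF-8 is the interoperable choice.
    return hashlib.sha256(password.encode("utf-8")).digest()

# Two spellings of the same string give the same bytes, hence the same hash.
assert hash_password("pässword") == hash_password("p\u00e4ssword")
```

The point is that the encoding decision gets made *somewhere*; a SHOULD in the spec lets the wrapper author make it deliberately instead of by accident.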

> Consider for example Windows (CryptoAPI -> stunnel -> web

Schannel :-) (that was a blast from the past)

> browser) or Android (OpenSSL? -> Dalvik -> app developers), 
> in both cases the consumers of the functionality are two levels 
> away from the ones implementing the password-processing function.
> Adding a note alerting users at the password end of the 
> chain to the issue is a good idea, but trying to tell developers 
> at the low-level crypto API end of the chain what to do when 
> they themselves have little to no control over what's being 
> passed to them probably isn't useful.

Perhaps there will be some low-level crypto lib developers who are
not in a position to implement this SHOULD recommendation. But they
can still pass on the recommendation in their own API documentation.

This API is unusual among crypto primitives in that it deals specifically
with human-entered, human-readable text. That is not unprecedented;
think of the challenges posed by certificate validation of
internationalized domain names.

> Another consideration is, how much of a problem is this in practice?

Again, I promise you there are developers just waiting to shove
UTF-16LE and what they think of as "ASCII" into this function
for no reason other than the lack of a document saying there's
a more interoperable way.
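Here is what that failure mode looks like in practice (Python again, with SHA-256 standing in for the real password hash):

```python
import hashlib

password = "pässword"

# What a Windows-centric developer might pass in ("#define UNICODE" world)...
h_utf16 = hashlib.sha256(password.encode("utf-16-le")).hexdigest()
# ...versus the cross-platform UTF-8 convention.
h_utf8 = hashlib.sha256(password.encode("utf-8")).hexdigest()

# Same logical password, different byte strings, so a hash stored by one
# system will never verify against the other.
assert h_utf16 != h_utf8
```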

> I've seen this issue come up for debate in the past, and 
> the general approach has always been that there's lots of hand
> -wringing, no-one can agree on what's best, some token words 
> are added to a spec somewhere and near-universally ignored, 
> and then life goes on as normal without the world ending.

I actually think that poor interoperability of password-based
credentials is currently a *real* problem. It poisons the security
ecosystem by teaching users that it's a bad idea to use a wider set
of characters in their passwords.

Of course this isn't the only reason users choose weak passwords,
but it's one we could try to do something about.

> So you could just say:

> Implementers should be aware of potential interoperability problems 
> due to character-representation issues and, if cross-platform 
> portability for a wide range of character types is an issue,
> use appropriate encodings such as Unicode or UTF-8.

Not bad, but I have a couple of comments:

1. Unicode isn't an encoding. I'm not just being pedantic; a lot
of code I work with literally has "#define UNICODE" to mean
UTF-16LE (or even UCS-2), even as most of the ecosystem is adopting UTF-8.
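The distinction is easy to demonstrate: "Unicode" names the character set, while the bytes a function actually receives depend entirely on which encoding was chosen.

```python
s = "é"  # U+00E9 LATIN SMALL LETTER E WITH ACUTE

# One character, three different byte sequences:
assert s.encode("utf-8") == b"\xc3\xa9"      # two bytes
assert s.encode("utf-16-le") == b"\xe9\x00"  # two different bytes
assert s.encode("latin-1") == b"\xe9"        # one byte
```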

> That's good enough, it alerts developers to the potential
> issue but leaves it up to them as to how they want to deal
> with it.

2. We can't expect developers to know in advance whether interoperability
will be a problem. Developers are usually asked to "please get this
specific use case working as soon as practical". But the way the world
evolves in the longer term is that databases of credentials get
re-purposed for authenticating many different systems (Kerberos,
LDAP, OAuth 2.0, etc.). For that we need standards.

This is why I favor the IETF-style approach of the spec being
opinionated (at the RFC 2119 SHOULD level) on what's required
for interoperability.
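As one illustration of what such a SHOULD could say (this exact recipe, and the function name, are my sketch rather than anything the spec currently mandates): normalize to NFC, then encode as UTF-8.

```python
import unicodedata

def canonical_password_bytes(password: str) -> bytes:
    # Sketch of a SHOULD-level recipe: normalize to NFC, then encode UTF-8.
    # (The choice of NFC here is an assumption, not the spec's requirement.)
    return unicodedata.normalize("NFC", password).encode("utf-8")

# "é" entered as one codepoint vs. "e" plus a combining acute accent:
composed = "\u00e9"
decomposed = "e\u0301"
assert composed != decomposed  # different codepoint sequences...
# ...but the same canonical bytes, so the resulting hashes interoperate.
assert canonical_password_bytes(composed) == canonical_password_bytes(decomposed)
```

Whatever recipe is chosen, the value is in everyone choosing the *same* one.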

- Marsh
