Message-ID: <9A043F3CF02CD34C8E74AC1594475C73F4B33020@uxcn10-5.UoA.auckland.ac.nz>
Date: Mon, 12 Oct 2015 13:32:32 +0000
From: Peter Gutmann <pgut001@...auckland.ac.nz>
To: "discussions@...sword-hashing.net" <discussions@...sword-hashing.net>
Subject: RE: [PHC] Specification of a modular crypt format (2)

Alexander Cherepanov <ch3root@...nwall.com> writes:

>-ftrapv is great for testing (e.g. under fuzzer) or for code which checks for
>overflows before operations and hence any overflow means a bug in the program.
>If you want wrapping use -fwrapv.

I prefer Visual Studio's overflow/underflow checking, mostly because of the
integrated IDE, which means you can fix the problem as soon as it occurs.
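
(For anyone who hasn't played with these flags, a minimal sketch of the
difference.  The commands are from memory, so check them against your
compiler, but something like this should abort under -ftrapv and print a
wrapped, negative value under -fwrapv; without either flag the overflow is
undefined behaviour and the optimiser is free to assume it can't happen:)

  #include <limits.h>
  #include <stdio.h>

  int main(int argc, char **argv)
  {
      /* cc -O2 -ftrapv demo.c && ./a.out  -> should trap at the addition
       * cc -O2 -fwrapv demo.c && ./a.out  -> prints INT_MIN               */
      (void)argv;
      int x = INT_MAX - 1 + argc;   /* == INT_MAX when run with no arguments */
      x = x + argc;                 /* signed overflow: trap, wrap, or UB    */
      printf("%d\n", x);
      return 0;
  }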

>Are you sure? When I run testlib from cryptlib 3.4.3 compiled with '-m32 
>-fsanitize=undefined' I get:

Ah, I've only run that with the build for the native environment, which is
invariably x64 not x86.  I'll have to add the -m32 version to the test
process.

>I've used gcc 4.9 here which doesn't have an option to crash on ubsan errors.
>With newer gcc or clang you can fuzz directly with ubsan.

I just use LLVM...
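
(In case it helps anyone else setting this up, the flag names below are from
memory so check them against your gcc/clang version, but something like the
following should make ubsan abort at the first error rather than just logging
it, and shows the sort of thing an x64-only test run never exercises because
long is 64 bits there:)

  /* clang -m32 -fsanitize=undefined -fno-sanitize-recover=undefined demo.c */
  #include <stdio.h>

  static long to_microseconds(long seconds)
  {
      return seconds * 1000000L;   /* overflows a 32-bit long above ~2147 s */
  }

  int main(void)
  {
      printf("%ld\n", to_microseconds(4000));  /* fine on LP64, UB on -m32 */
      return 0;
  }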

>This blogpost also mentions an interesting option -Wstrict-overflow. 

I already use that.  The problem is that it gives close to zero warnings about
it optimising anything away: I do get a few warnings about potential overflow,
but I can't see how what's being warned about could actually overflow.  So
there could be any number of places where it's breaking the code without
warning about it, and several places where it's warning that it's going to
break the code based on what is, as far as I can tell, a false positive.
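
(For reference, this is the canonical sort of construct the warning is about:
an overflow check written in signed arithmetic.  Assuming I'm remembering the
diagnostic correctly, gcc tells you it's assuming signed overflow doesn't
occur and then folds the test away:)

  /* gcc -O2 -Wstrict-overflow=5 -c wrapcheck.c  (file name just an example) */
  int at_int_max(int x)
  {
      /* The optimiser may treat x + 1 as unable to overflow and turn this
       * into 'return 0', which is exactly the silent breakage complained
       * about above.                                                       */
      return x + 1 < x;
  }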

>You seem to overly concentrate on static analysis. Perhaps try more dynamic
>analysis?

I use every type of analysis I can get my hands on; the more the better.
Static is better because it finds conceptual errors in the code (Coverity
performs lots of analysis from first principles, while Prefast works best if
you have heavily-annotated code, so you get quite different results), while
dynamic analysis only finds problems that you can induce via the appropriate
input to the program.  A bigger problem with dynamic analysis is its potential
to consume unbounded amounts of CPU, so unless you've got access to a private
server farm it's difficult to run...

>What are alternatives? 

Doing what Microsoft appear to do, which is to apply common sense to the
behaviour of their software rather than using an overly literal interpretation
of the spec to justify breaking people's code.  By this I don't mean allowing
null pointers to be deref'd, but rather not following the premise that "I've
seen signs of potential UB, so now I can do whatever I want to the code".

>If you don't aim at standards-compliance your code is doomed to be broken by
>compilers.

Hmm, that's a bit like saying that car manufacturers can leave out seat belts,
crumple zones, and airbags from their cars because the standards (driving
rules) say you shouldn't crash your car: if you aren't standards-compliant and
you crash your car, your life is doomed to be broken.

To make the post at least vaguely informative for others, some thoughts on
various tools for code analysis:

cppcheck: Good for general style analysis, but doesn't do the in-depth flow
analysis of things like Coverity and Prefast.

Coverity: My favourite static analyser, mostly because they put so much work
into getting rid of FPs.

Fortify/Klocwork: Alternatives to Coverity, but lots (and lots) of FPs.

clang: A lightweight, but also free, alternative to Coverity.

Prefast: Finds lots of things that other analysers don't, but requires a lot
of code annotation to be effective.  Has also improved greatly in recent years
(less incomprehensible error messages, fewer FPs, and it now actually makes
use of annotations for things like range checks).  Use of annotations is
under-documented (you need to reverse-engineer header files to find out how to
use some features), and in particular by-reference parameters seem really hard
to annotate.

Bounds Checker: My favourite dynamic analyser (because it's integrated
directly into Visual Studio so it comes for free with your development
process), but has been systematically strangled by Micro Focus for years.

ASAN/ubsan: Next-favourite after BC.

Valgrind: Formerly next-favourite after BC, but recently replaced by
ASAN/ubsan which have improved markedly in the last year or two.

AFL: Great for finding all the corner cases that standard test suites don't
get to.
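
(If anyone hasn't tried it, the setup is about as simple as it gets.  The
parse_thing() below is just a stand-in for whatever parser you're testing, but
the general shape is a stdin-driven harness built with the afl compiler
wrapper:)

  /* afl-clang-fast -g -o harness harness.c   (or afl-gcc)
   * afl-fuzz -i testcases -o findings -- ./harness                         */
  #include <stdio.h>
  #include <stdlib.h>

  static int parse_thing(const unsigned char *buf, size_t len)
  {
      /* Stand-in for the real code under test. */
      return (len > 4 && buf[0] == 0x30) ? 0 : -1;
  }

  int main(void)
  {
      static unsigned char buf[64 * 1024];
      size_t len = fread(buf, 1, sizeof(buf), stdin);
      return parse_thing(buf, len) < 0 ? EXIT_FAILURE : EXIT_SUCCESS;
  }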

(Others): Those are the ones that spring to mind; I've tried a pile of others
but found those to be the most useful.

Peter.
