From: full-disclosure at royds.net (Bill Royds)
Subject: Coding securely, was Linux (in)security

----- Original Message ----- 
From: "Paul Schmehl" <pauls@...allas.edu>
To: <full-disclosure@...ts.netsys.com>
Sent: Sunday, October 26, 2003 8:39 PM
Subject: [Full-Disclosure] Coding securely, was Linux (in)security


> --On Sunday, October 26, 2003 8:04 PM -0500 Bill Royds <broyds@...ers.com>
> wrote:
>
> > You are saying that a language that requires every programmer to check
> > for security problems on every statement of every program is just as
> > secure as one that enforces proper security as an inherent part of its
> > syntax?
>
> Well, no, that's not at all what I'm saying.  What I'm saying is that, no
> matter how well the language is devised, programmers must still understand
> how to write secure code and be aware of those places in the code where
> problems can arise and prevent them.
>
  Yes, programmers need to know how to program. But the programming language
should make it easy to write secure code, not difficult.
C makes it hard to write secure code: the language neither checks
automatically for buffer overflows nor rules out the situations that create
a buffer overflow in the first place (which would cost no checking overhead
at all).
   In C, "places in the code where problems can arise" are practically
everywhere.
In Ada or Eiffel or ..., those places are confined to very specific points,
like calling interfaces or input statements.
Programming securely in C requires every programmer to be perfect to achieve
secure code. Even code written by very good and security-aware programmers,
such as the OpenSSH developers, has been found to have security problems. If
you look at security advisories, count how many come from Ada code. C makes
it hard to write secure code.
   I know that one can't get rid of C overnight. But at least create C
compilers that restrict it to more secure constructs: not just banning calls
to sprintf or strcpy or memcpy, but language definitions that don't allow
uncounted strings to be passed as arguments and that actually check that
arrays are arrays and not pointers. This breaks the present C language
definition, so we could perhaps create a "C-" language that gets rid of
unbounded strings and of arrays as pointers (which would also make the
language more efficient), prevents use of pointers after malloc fails, etc.
The language that one writes code in should help security, not hinder it.
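
For concreteness, here is a rough sketch (the counted_str type below is
hypothetical, invented just for illustration; no current compiler provides
it) of the difference between the unbounded strings C passes around today
and the counted strings such a restricted dialect could insist on:

    #include <string.h>

    /* Today's C: nothing relates the size of dst to the length of src,
       so the copy can run off the end of dst. */
    void copy_today(char *dst, const char *src)
    {
        strcpy(dst, src);
    }

    /* Hypothetical counted string: the capacity travels with the buffer,
       so a copy can always be clamped to fit. */
    struct counted_str {
        size_t capacity;
        char  *buf;
    };

    void copy_counted(struct counted_str dst, struct counted_str src)
    {
        size_t n = src.capacity < dst.capacity ? src.capacity : dst.capacity;
        memcpy(dst.buf, src.buf, n);
    }

With the first form the compiler cannot even express the rule that gets
broken; with the second the rule is part of the type.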





>
> [snipped a bunch]
> > I have been programming in C since the 70's so I am quite aware of what
> > the language can do and appreciate its power. But the power comes at the
> > price of making it much more difficult to handle the security and
> > readability of C code. Since one can do just about anything in C, the
> > language compilers will not prevent you from shooting yourself in the
> > foot. Other languages restrict what you can do, to prevent some security
> > problems.
> >    But there is so much code out there that is written in C (or its
> > bastard child C++) that we are not going to get rid of it soon. Java
> > would actually be a good language if Sun would allow one to write
> > compilers for it that generated native machine language, not just Java
> > byte code.  But the conversion of the world programmer mindset to
> > restricting use of insecure language features will take eons so I give
> > it no hope.
> >
> So which makes more sense to you?  To convert the world's programmers to a
> new language?  Or to teach them to code securely?  Surely, if we were to
> replace C today, they would just find other ways to write insecure code?

   If we replaced C today, it would take effort to write insecure code; right
now, it takes effort to write secure code.
I would hope you prefer to make secure code the easier of the two to write.




> >
> > A programmer certainly can not know what his pointers refer to. That
> > would require the writer of a function to know all possible circumstances
> > in which the routine would be called and to somehow prevent her routine
> > from being linked in with code that calls it incorrectly. That is often
> > called the halting problem. Most security problems come from exactly the
> > case that the subroutine user "knows" what are the arguments for all
> > calls in the original use and handles those. The infinity of all other
> > cases can not be checked at run time without either significantly slowing
> > down the code or risking missing some.
>
> But it shouldn't be the job of the writer of a subroutine to verify the
> inputs.  The writer of a subroutine defines what the appropriate inputs to
> that routine are, and it's up to the *user* of that subroutine to use it
> properly.  The entire concept behind OOP is that you cannot know what's in
> the "black box" you're using.  That makes it incumbent on you as the
> *user* of a subroutine to use the correct inputs and to *verify* those
> inputs when necessary.
>

   You are advocating writing insecure code. By definition, someone trying
to break into a routine DELIBERATELY supplies inappropriate inputs to break
it. There is only a finite number of valid inputs to most routines. A secure
subroutine needs to NOT work on invalid inputs, but to reject them in a
secure manner.
A good programming language restricts the inputs at the compile stage and
can check for type conformance at compile, link or, as a last resort, call
time. C doesn't guarantee anything to the called routine. It assumes that
the caller is benign and wants to use the routine as it is designed. It has
little defence against a caller that wants to subvert the routine.
   A more strongly type-enforcing language, like Ada or even Delphi, checks
that arguments and parameters are correct at least at link time, or puts
code in the call to prevent calls with types not compatible with the
parameters.
To have a compiler handle this for a C function, it would need to do full
information-flow analysis on every possible data path through the routine,
rather than just checking at call time.
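
To make that concrete, here is a small sketch (the function name is invented
for illustration and is not the actual RPC interface): a C parameter
declared as an array is silently rewritten as a bare pointer, so no size
information ever crosses the call and the compiler has nothing to check.

    #include <stdio.h>
    #include <wchar.h>

    /* The [16] below is documentation only: the compiler treats the
       parameter exactly as "wchar_t *name". */
    void set_machine_name(wchar_t name[16])
    {
        /* Prints the size of a pointer, not of 16 wide characters. */
        printf("sizeof(name) in the callee: %lu\n",
               (unsigned long) sizeof(name));
    }

    int main(void)
    {
        wchar_t longname[64] = L"much-longer-than-sixteen-characters";
        set_machine_name(longname);   /* accepted without any diagnostic */
        return 0;
    }

In Ada the equivalent constrained parameter would be rejected at compile
time when the sizes are statically known, and checked at run time otherwise;
in C the mismatch is simply invisible to both compiler and linker.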



> Now a subroutine writer is perfectly free to do error checking if they
> choose, but the user of that subroutine should never *assume* that the
> subroutine *does* do error checking.
>
> >    The recent MSBlaster worm is a case in point. The writers of the RPC
> > code "knew" that code within MS Windows never passed more than a 16
> > unicode character (32 byte) machine name as one of its parameters, so did
> > not need to check (the argument was not of type wchar * but of type
> > wchar[16]). Since C actually does not implement arrays at all, but only
> > uses array syntax [] as an alias for a pointer, the only way to prevent
> > buffer overflow in a C routine is to never allow string arrays as
> > parameters to functions, completely obscuring the meaning of code.
> > The problem is that C encourages bad coding practice and obscures the
> > actual meanings of various data structures, and even the code auditing
> > techniques of the OpenBSD crowd do not find all the possible mistakes.
> > A language will never be goof-proof, but it should not make it easier to
> > goof than to be correct.
> >
> I'm not disagreeing with this point at all.  I'm simply saying that
> programmers *must* verify inputs when they cannot be known.  In this
> particular example, you're pointing out a classic mistake.  The
> programmers of the RPC code *assumed* that they knew what the input would
> be when in fact they could not *know* that for certain.  And so we ended
> up with another classic example of a buffer overflow (actually several).
> Assumptions are the mother of all problems.
>
> You complain that the code would be really slowed down if consistent and
> complete error checking were done.  I wonder if anyone has ever really
> tried to write code that way and then tested it to see if it really *did*
> slow down the process?  Or if this is just another one of those "truisms"
> in computing that's never really been put to the test?
>
> BTW, in my example, I didn't use strlen.
>
> Paul Schmehl (pauls@...allas.edu)
> Adjunct Information Security Officer
> The University of Texas at Dallas
> AVIEN Founding Member
> http://www.utdallas.edu
>
> _______________________________________________
> Full-Disclosure - We believe in it.
> Charter: http://lists.netsys.com/full-disclosure-charter.html

