Message-ID: <NDBBKKOCALIBPMFFNMEMKEBDENAA.cseagle@redshift.com>
From: cseagle at redshift.com (Chris Eagle)
Subject: Coding securely, was Linux (in)security
Brett Hutley wrote:
> Sent: Sunday, October 26, 2003 7:44 PM
> To: Paul Schmehl
> Cc: full-disclosure@...ts.netsys.com
> Subject: Re: [Full-Disclosure] Coding securely, was Linux (in)security
>
>
> Paul Schmehl wrote:
>
> *snip*
> > You complain that the code would be really slowed down if
> > consistent and
> > complete error checking were done. I wonder if anyone has ever really
> > tried to write code that way and then tested it to see if it really
> > *did* slow down the process? Or if this is just another one of those
> > "truisms" in computing that's never really been put to the test?
>
> Yup. I work on large distributed systems for financial risk management
> processing. We have some very tight calculation loops with preallocated
> buffers because we can't afford to do any unnecessary stuff in these
> loops. Because they are buried deep in the calculation engine we don't
> need to worry about validating the input. An unnecessary piece of code
> here makes the difference between the job taking 1 hour to process and
> taking 10. There are some circumstances where tight code is essential. Of
> course in MOST systems the speed of execution is not that critical.
>
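To make Brett's point concrete, here's a minimal sketch of that pattern
(the names and numbers are mine, not his code): the invariants are
established once, outside the loop, so the inner loop itself carries no
per-iteration checks.

    #include <stddef.h>

    /* Internal helper: ASSUMES prices and out each hold at least n
     * elements and that n >= 2.  The caller establishes that
     * invariant; nothing is re-checked per iteration. */
    static void compute_returns(const double *prices, double *out, size_t n)
    {
        for (size_t i = 1; i < n; i++)   /* tight loop: no validation */
            out[i - 1] = prices[i] / prices[i - 1] - 1.0;
    }
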
At best this sort of coding is appropriate when functions are tightly
coupled and not exported. It would of course behoove you to attempt to
prove that the parameters being passed around never go out of range. It's
publicly exported functions that fail to validate parameters that worry me.
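Continuing the sketch above (again, hypothetical names), the exported
entry point is where the validation belongs: it rejects bad input at the
boundary and only then hands off to the unchecked internals, so the hot
loop stays tight without trusting the outside world.

    #include <errno.h>
    #include <stddef.h>

    int calc_returns(const double *prices, double *out, size_t n)
    {
        if (prices == NULL || out == NULL || n < 2)
            return -EINVAL;              /* reject bad input at the boundary */
        compute_returns(prices, out, n); /* invariant now proven; loop stays tight */
        return 0;
    }
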
Chris