Message-ID: <4465E09A.5040506@rs-labs.com>
Date: Sat May 13 14:35:46 2006
From: roman at rs-labs.com (Roman Medina-Heigl Hernandez)
Subject: How secure is software X?

Lucien Fransman wrote:
> I often wondered about this. An assessment is only as good as the assessor.
> What is the use of a "i can break and exploit $foo application, and have
> shown this in my tests", if it is done with a private exploit? Again, I'm
[...]
> It only shows that the application has a bug, that is known to you or your
> company. Will it benefit the company that is being tested? I am not so sure
> about this. What would a company do with this kind of information? Fix the
> bug? They can't, because they don't have access to the source. Will it entice
> the vendor to fix the vulnerability? No, as they don't know it exists.
It shows that the company's security could/should be improved. Security
must follow a layered approach. Perhaps you cannot fix the real problem
(i.e. the bug), but you can prevent it from being exploited/abused. For
instance, the pentester *probably* couldn't have exploited that 0-day
overflow on that critical Linux server if it had been running
grsecurity. Or the pentester couldn't have exploited that 0-day SQL
injection bug if Apache had been configured with some kind of
application firewall (ModSecurity, etc.). Or the server hacked by a
0-day couldn't even have been reached if the network had been properly
segmented/firewalled. Or my IDS might have noticed the 0-day
exploitation (not very sure of this }:-)). Or I couldn't prevent the
compromise, but I had it logged on my bullet-proof syslog server.
Or... <etc, etc>
If I were the company, I'd like to know that I survived 0-day exploit
attempts, or at least detected them. That's important added value to me.
--
Regards,
-Roman
PGP Fingerprint:
09BB EFCD 21ED 4E79 25FB 29E1 E47F 8A7D EAD5 6742
[Key ID: 0xEAD56742. Available at KeyServ]