lists.openwall.net - Open Source and information security mailing list archives
Date: Fri May 12 19:55:46 2006
From: lucien.fransman at irc2.com (Lucien Fransman)
Subject: How secure is software X?

On Friday 12 May 2006 05:20, Blue Boar wrote:

Hello,

> Do you want just public results of standardized blackbox testing?
> Something similar to the ICSA firewall certification? (Though, I assume
> you want actual public results.)

That would be ideal, properly anonymized of course. It would be nice to have
a list of applications that pass the "Litchfield" criteria, and software
vendors would have a new marketing tool :) (Passed the "Litchfield"
criteria, so it's better than product $foo, which failed 80% of the tests.)

> Would you include source review? The Sardonix project tried to do that.

Ideally, yes. But that would give an unfair advantage to software that has
its source available. I would consider it a nice extra, not a requirement.

> Who does the testing, and who pays for the time and equipment to do
> that? Do all products get re-tested every time a new version of the
> product suite is released? Do the test suites have to be free? Do they
> re-test for every release of the victim software?

Well, the tests get done anyway. Why not bundle the results? The time spent
at customer $bar is billed time anyway. Using the (anonymized) results
depends on NDA contracts and the like, but shouldn't pose a risk for that
customer. And it would shorten testing time in the future, because the
pentester knows what the results of others are and only has to verify them.

> Don't people like yourself derive some benefit from having some portion
> of your assessment work stay proprietary? If I'm trying to enhance the
> test suite with some new fuzzing, and I find a sexy bug, don't the
> incentives tend to lean towards me selling the bug to iDefense and
> hiding my fuzzer in the meantime?

I often wondered about this. An assessment is only as good as the assessor.
What is the use of an "I can break and exploit $foo application, and have
shown this in my tests" if it is done with a private exploit? Again, I'm
thinking from the position of a company hiring a pentester/assessor, not
from the multitude of people trying to gain from exploiting a 0day. It only
shows that the application has a bug that is known to you or your company.
Will it benefit the company being tested? I am not so sure. What would a
company do with this kind of information? Fix the bug? They can't, because
they don't have access to the source. Will it entice the vendor to fix the
vulnerability? No, as they don't know it exists. In my opinion, using
private exploits and private vulnerabilities as a pentester/assessor only
spreads FUD, and nothing really constructive gets done. All this
accomplishes is that the company with the biggest archive of unpublished
exploits, PoC code and vulnerabilities has a better chance of presenting a
"we can break into your system 100% of the time" graph during sales talks.

> Don't we fairly quickly arrive at all products passing all the standard
> tests, and "passing" no longer means anything?

It means something. Either the fuzzers or the testing technique is out of
date, or the applications that pass have at least put some thought into the
whole security process. It's a nice way to create a baseline: an
application that passes the default tests has a minimum level of security.

> I like the idea, but I'm wondering why people would contribute. I'm
> also wondering how it can stay consumer-beneficial, and not end up
> being driven by product vendors.

For the same reason that OWASP and the OSSTMM are successes. They don't get
a lot of airtime, but everybody in the field knows about them, and everybody
agrees (to a certain point) that they are useful. What would happen if we
didn't have initiatives like this? There would be no framework, no
comparison between methodologies and no standards. The product vendors play
a role, but I see it as the task of the people creating the standards to
avoid it becoming a vendor-only platform. And software vendors are not the
enemy.

Anyway, these are my thoughts on this. The default disclaimers apply (not
necessarily the view of my employer, yada yada).

> BB

Enchanter_tim