Message-ID: <Pine.LNX.4.61.0503291645000.695@forced.attrition.org>
Date: Tue Mar 29 23:01:46 2005
From: jericho at attrition.org (security curmudgeon)
Subject: windows linux final study
: Yes, but did you actually verify their research using their methodology
: to see if they screwed up?
Personally no, and I won't =)
: As in any study, the methodology and assumptions control the result. You
: can either poke holes in their methodology (for example by pointing out
: that the use of only published results is not a true indication of their
: security, in which case the eEye list of purported flaws is relevant) or
: you can use their exact methodology to recreate the work and prove that
: their data collection was wrong.
Exactly. Perhaps I should have expanded on that toward the end, but my
comment about mincing words refers to this. This is like the previous
Microsoft-funded report from 1999 [1], in which a company named Mindcraft
set up a test to compare Linux vs Windows NT Server 4.0 and judge their
"performance" (a test that was also rebutted fairly well [2]). Two things
about Mindcraft stood out: 1) Microsoft funded the test, which Mindcraft
didn't mention at first, and 2) Mindcraft's own advertising completely
destroyed their credibility:
"With our custom performance testing service, we work with you to define
test goals. Then we put together the necessary tools and do the testing.
We report the results back to you in a form that satisfies the test
goals."
So like you said, if you design the methodology with the intent of
reaching certain results, you can't trust it as anything but a glorified
marketing brochure. Instead of pointing that out more clearly (as I
probably should have), I skipped past that to point out that Microsoft is
not fast to patch. Of course they have a < 30 day response time when they
coordinate disclosure with these companies, many of which get advance
copies of their software and don't want to burn bridges, and many of which
follow ethical disclosure (to a fault sometimes). In the real world,
discovery of a vulnerability isn't limited to one person or company. We
have seen this occur several times, where NGSS, eEye and iDefense [3]
discovered the same vulnerability, and the resulting advisory w/ patches
came out months later [4]. During that time frame, we can't expect or
assume that no one else discovered the issue and/or used it for their own
ends. Focusing on the time between public disclosure and patch release is
a red herring (in this case).
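To put rough numbers on that, here is a minimal sketch with hypothetical
dates (not figures from the advisories above): the metric these studies
track is the disclosure-to-patch window, but the window users actually
live with runs from first discovery to patch.

    from datetime import date

    # Hypothetical timeline for one vulnerability (illustrative only).
    first_discovery   = date(2004, 2, 10)  # first researcher finds the bug
    public_disclosure = date(2004, 6, 10)  # coordinated advisory published
    patch_released    = date(2004, 6, 10)  # vendor patch ships the same day

    # What the studies measure: public disclosure to patch.
    days_of_risk = (patch_released - public_disclosure).days      # 0 days

    # What users actually face: first discovery to patch, during which
    # anyone else may have found and quietly used the same bug.
    exposure_window = (patch_released - first_discovery).days     # 121 days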
: I guess what I'm saying is that you can't say the study is wrong if they
: release and follow their own methodology, but you CAN say it's just plain
: not relevant due to the assumptions and methodology. And, let's not
: forget, the person who FUNDS the study probably CONTROLS the
: assumptions!
In an ideal academic setting, I can't say the study is wrong if their
methodology is followed and results published. In reality, if I factor in
other things (like you just did with RedHat and 'official' patches), I can
say the study is wrong. There is often a serious disconnect between
academia and the rest of the world that actually deploys these platforms,
and that real-world context is lost in these 'studies'.
jericho
[1] http://www.mindcraft.com/whitepapers/nts4rhlinux.html
[2] http://lwn.net/1999/features/MindCraft1.0.php3
[3] http://www.nextgenss.com/advisories/realra.txt
http://www.idefense.com/application/poi/display?id=109&type=vulnerabilities
http://www.eeye.com/html/research/advisories/AD20040610.html
[4] http://www.service.real.com/help/faq/security/040610_player/EN/