Message-ID: <3E39B938.60606@thievco.com>
From: BlueBoar at thievco.com (Blue Boar)
Subject: Was: Full Disclosure = Exploit Release - No disclosure No Fix
yossarian wrote:
> Maybe standardisation in disclosure would help the statistically inclined:
>
> OK, so what would be useful: date of discovery, date of first
> notification, date and summary of response, date of fix release - if any,
> and a single point of security notification at the vendor (Y/N). This
> should be the bare minimum, I guess. Quality of fix would be nice, since
> some fixes don't fix. Fix vs. workaround could also be useful; disabling a
> certain feature is IMHO no fix, even if the feature is generally
> considered useless - it might be useful to some, and if it isn't, remove
> it. I don't think a server containing this info should be in the US,
> since it will be damaging to certain commercial interests.
That would be a good set of things to have. In general, I think it would
be quite possible to build it, given resources (time, mostly). Some
vendors might even be inclined to fill those fields in when they release
their patch. I don't expect the vendors who suck at getting things out
quickly to do so, though. :)
I wonder if that sort of thing could be added to the OSVDB or CVE?
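To make that concrete: a minimal sketch of what one record might look
like, as a Python dataclass. The field names are my own invention, purely
illustrative - they mirror the proposed bare minimum above, not any real
OSVDB or CVE schema.

    from dataclasses import dataclass
    from datetime import date
    from typing import Optional

    @dataclass
    class DisclosureRecord:
        # One entry in a hypothetical disclosure-timeline database.
        # Field names are invented for illustration, not taken from
        # any existing vulnerability-database schema.
        vuln_id: str                    # e.g. a CVE name, once assigned
        discovered: date                # date of discovery
        vendor_notified: date           # date of first notification
        vendor_response: Optional[str]  # date and summary of response, if any
        fix_released: Optional[date]    # None if no fix was ever shipped
        single_contact_point: bool      # vendor has one security contact, Y/N
        fix_quality: Optional[str]      # "fix", "partial", "workaround", ...
        feature_disabled: bool          # "fix" that just turns the feature off

        def days_to_fix(self) -> Optional[int]:
            # Vendor turnaround from first notification to fix, if any.
            if self.fix_released is None:
                return None
            return (self.fix_released - self.vendor_notified).days

Even just days_to_fix(), aggregated per vendor, would give the
statistically inclined something to chew on.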
> Well, you'd have to monitor all covert channels on the web to find out.
> Maybe Carnivore will help? Nah. Not realistic, I suppose.
I'm hoping that we'll get useful, widely-deployed anomaly-based detection
one of these days, and start catching some 0day.
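The core idea is just baseline-and-deviate. A toy sketch in Python, with
an arbitrary threshold and a made-up traffic feature - a real detector
needs far better models than this:

    import statistics

    def is_anomalous(history, current, nsigma=3.0):
        # Flag `current` if it sits more than `nsigma` standard deviations
        # from the mean of recent observations. A toy illustration of
        # anomaly-based detection, nothing like a deployable IDS.
        if len(history) < 10:                       # not enough baseline yet
            return False
        mean = statistics.mean(history)
        stdev = statistics.pstdev(history) or 1e-9  # avoid divide-by-zero
        return abs(current - mean) > nsigma * stdev

    # e.g. packets per second toward UDP/1434, sampled once a minute
    # (numbers made up):
    baseline = [12.0, 9.5, 11.2, 10.8, 13.1, 12.4, 9.9, 11.7, 10.2, 12.9]
    print(is_anomalous(baseline, 4200.0))   # Sapphire-scale spike -> True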
>> -How often did someone "need" someone else's script to break into a box?
>
> With only an estimated meagre 2% of attacks detected, who can tell? Define
> attack. And not all attacks are scriptable, only certain types - you can
> only script for flaws in systems used, not for all possible network design
> or system implementation errors. Could you imagine a midsize IT company
> setting up a VPN to its corporate network without enabling encryption? I
> have seen it. Can you imagine a major telco using open shares where any
> connected user has admin? No, you wouldn't, but it has happened - and it
> has not changed yet. Would you write a script for that? Unlikely.
> Competent people usually cannot foresee nor understand the errors made by
> others. Of course some people need scripts, but stuff like Nessus is
> hardly useful unless you can code your own scripts. At this moment, Nessus
> can test 1165 different vulns, gathered over the years. I once did some
> research, and I found out that over a period of six months (April -
> September 2001), 588 new vulnerabilities were posted on the major lists.
> So Nessus holds less than one year's worth of announced vulns - 588 in six
> months extrapolates to roughly 1176 a year, slightly more than Nessus
> covers. Of course these are testing scripts, not actual sploits, but count
> the number of sploits on supposedly black hat servers. At most 200 per
> year emerge. Many go for the same vulns; quite a few do not work under any
> circumstances. Maybe the working ones are used to break into hundreds of
> boxes, who knows - but not likely. I think it is very hard to prove the
> relation between available scripts - mostly just probing scripts - and
> actual attacks. Unless you consider a probe an attack. If it is very hard
> to prove, chances are (Occam's Razor) that there is no relation. The only
> case you can make is for viruses, where toolboxes can empower the dim. But
> this is not breaking into a box.
Anything that could be done by "hand" means they didn't need someone's
tool. The question I was getting at is: if an exploit weren't available,
how often would the person who is inclined to break in be able to write
their own? How explicit an advisory would they need?
> Would be nice to know, but I think the hype does not help. Where does
> this hype happen? On dedicated lists, which are not read by the average
> admin.
The level of "hype" I had in mind was when the problem is bad enough that
traditional media covers it, which is a small percentage of security
problems indeed.
>
> To this list of unanswerable questions I could add the ratio of security
> fixes with or without preceding full or half-full disclosure.
That would be interesting, but most closed-source vendors have an interest
in keeping most details to themselves. Microsoft has discovered things
internally a few times and released advisories, probably because they felt
the issue was serious enough that they wanted to emphasize the need to
patch. They give basically no details, and you have to disassemble the
binaries from before and after the patch if you want to know what the hole
is.
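That before-and-after comparison usually starts with something as crude as
this sketch: find which regions of the patched binary changed, then point
the disassembler only at those. Byte-level diffing is naive - any
relinking shifts offsets - but it narrows the search; the file names below
are hypothetical.

    def changed_regions(old_path, new_path, block=256):
        # Yield file offsets where the pre- and post-patch binaries differ,
        # compared block by block. Crude: relinking can shift everything,
        # so this only helps when the two layouts roughly line up.
        with open(old_path, "rb") as f:
            old = f.read()
        with open(new_path, "rb") as f:
            new = f.read()
        for off in range(0, max(len(old), len(new)), block):
            if old[off:off + block] != new[off:off + block]:
                yield off

    # Illustrative usage, file names invented:
    # for off in changed_regions("server.dll.orig", "server.dll.patched"):
    #     print("differs near offset", hex(off))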
> BTW, whatever happened to disclosing 'somewhere in between', where only
> skilled people could understand the technical details of an advisory and
> turn it into a script?
Since it's usually skilled people who are discussing that point, I think
they all tend to assume that kind of trick is completely useless. It may
or may not be, but since most of the participants can fill in the missing
details themselves, it's hard to see it as an effective measure. Plus, the
community will still give points to whoever posts a fixed or enhanced
version, so it's not likely to stay obscure very long.
>
> So I guess that by this venue we can never prove whether full disclosure
> is a good idea. But maybe it is not the correct question - we want vendors
> to build safer software, not to prove we can find holes and quibble about
> the credits, commercial interests etc. Full disclosure is just a means to
> an end, and we cannot see the end getting any closer. Hence my suggestion
> of a benchmark, available to all.
Yes, exactly. The second question is: once people have a real measure of
who is more secure, will they care or do anything differently?
Full disclosure really only benefits the vigilant security-minded in the
short term. I suspect such stats would do just the same.
> How do we make it a benchmark (i.e. understandable for the many, and
> usable for the consultant types)? Will a software company be judged on
> the number of programs it sells, or on the number of lines of code (say,
> security holes per 1000 lines of code)? Should the type of program be
> taken into account - with a reverse bonus if a program does not need
> external communication but still has remotely exploitable holes
> (completely flawed versus just security flawed)?
>
> Other probs
>
> Researchers follow trends and find the type of vulns they are looking
> for. Remember when embedded or default passwords were 'in'? Hardly see
> them anymore - now it has to be buffer overflows, header manipulation or
> Unicode-related issues. Does this mean the older types of flaws aren't
> there anymore? If true, it would prove that the industry is getting
> better at securing its products. If false, it just proves that the few
> researchers find flaws in whatever direction they care to look, and many
> types of vulns will pass completely unnoticed, since they can only look
> in one direction at a time.
Yes, broad review isn't currently rewarded, only sexy vulnerabilities. The
Sardonix project might help with that, if it ever kicks off.
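Back on the benchmark question above: whatever fields get picked, the
arithmetic is trivial; the hard part is agreeing on the weights. A toy
sketch, where every name and number is an arbitrary assumption of mine:

    def vuln_density(vulns, kloc):
        # Security holes per 1000 lines of code.
        return vulns / kloc

    def benchmark_score(vulns, kloc, remote_vulns, needs_network):
        # Toy benchmark: base density, doubled as a "reverse bonus" when
        # a program that does not need external communication still has
        # remotely exploitable holes. The 2.0 weight is pure invention.
        penalty = 2.0 if (remote_vulns > 0 and not needs_network) else 1.0
        return vuln_density(vulns, kloc) * penalty

    # e.g. 12 holes in 300 KLOC, 3 of them remote, in a program that
    # never needed to talk to the network at all (numbers made up):
    print(benchmark_score(12, 300.0, 3, False))   # -> 0.08

Lower is better, of course; the open question is whether anyone buying
software would change their behavior because of a number like that.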
>
> Another major security problem I have never seen addressed here, nor
> anywhere else, is that the majority of people on the web are no longer
> native English speakers. (The US is 40%; add the UK, Canada, Down Under
> and India, and it might just be half.) Is this a security problem? I have
> noticed from reading security lists in other languages that sometimes the
> vulns are not posted at all on the big English forums. This will increase
> in the future. How do you explain a complex hole in some American
> software to the vendor if you can barely manage the language? People will
> not bother. So in the future, vendor notification will drop.
>
> Research from a few years ago (1998, if I am correct) proved that systems
> are significantly less patched in countries where use of English is less
> common. Server software, databases - many of the common programs and
> books are not translated. Advisories are not translated. Bugtraq, this
> list, and the readme files that come with updates and hotfixes are in
> English. I have done networks in Italy and France - well, the software
> was in English, but the admins could rarely produce a remotely
> understandable sentence, let alone notify a vendor (or read the manuals).
> When the software was in the native language, there were no
> language-specific patches, and the English patches didn't work properly,
> or changed part of the menus to English - or both. The security industry
> is also mainly American, or English-centered. The SQL worm proved again
> that secured shops can be hurt by unpatched systems elsewhere. Security
> has always been an uphill battle, but it is getting steeper.
>
> The security of the Net will therefore decrease if vendors don't extend
> their support to other languages, and independent books are not
> translated - since it is not commercially viable. And the feasibility of
> a complete list of known exploits will also decline. If one million
> Chinese know, but no one cared to translate it, does it qualify as a
> known exploit, or will it be a 0day?
>
> My guess is that the best thing security researchers can do for the long
> term is learn Chinese. Especially if the corporate marketing strategy
> prescribes posting vulns - translating is a lot easier than crunching
> code.
Yes, very interesting point. I was thinking about this over the weekend
when Sapphire hit. The exploit code was posted on the CNHonker site. I
had no idea... I don't read Chinese. I can read the exploit just fine, but
I'm not in the habit of drilling down randomly through foreign-language
security sites on the off chance of finding something.
Someone could probably kick every other vulnerability database's ass by
employing multilingual researchers.
BB