Date: Thu, 7 Oct 2004 12:38:23 -0700
From: "Drew Copley" <dcopley@...e.com>
To: "Windows NTBugtraq Mailing List" <NTBUGTRAQ@...TSERV.NTBUGTRAQ.COM>,
   <full-disclosure@...ts.netsys.com>, <bugtraq@...urityfocus.com>
Subject: RE: Disclosure policy in Re: RealPlayer vulnerabilities


 

> -----Original Message-----
> From: Windows NTBugtraq Mailing List 
> [mailto:NTBUGTRAQ@...TSERV.NTBUGTRAQ.COM] On Behalf Of Martin Viktora
> Sent: Thursday, October 07, 2004 11:04 AM
> To: NTBUGTRAQ@...TSERV.NTBUGTRAQ.COM
> Subject: Disclosure policy in Re: RealPlayer vulnerabilities
> 
> Apparently, both Eeye Digital Research (US software security company)
> and NGS Software Ltd (a UK based research firm) claim credit for
> discovering the recent vulnerability in RealPlayer. This might not be
> as interesting as the fact how the two companies decided to inform
> about the vulnerability. While NSG took responsible approach, quote:
> 
> > NGSSoftware are going to withhold details about these flaws for
> > three months. Full details will be published on the 6th of January
> > 2005. This three month window will allow users of RealPlayer the
> > time needed to apply the patch before the details are released to
> > the general public. This reflects NGSSoftware's new approach to
> > responsible disclosure.
> 
> Eeye went ahead and released technical details about the vulnerability
> just a few days after the vendor made the patch available. Many of you
> may remember another vulnerability disclosure made by Eeye in March
> 2004 when they released technical information about a flaw in ISS
> security products (ICQ parsing module) that was followed by a
> "zero-day-attack", when in 36 hours a particularly damaging "Witty"
> worm struck users of ISS products (The worm damaged users' data by
> writing over random hard disk sectors).

[I hesitate to even respond, outrageous as this post is, because it is
obviously wrong to anyone who understands the real process of security
bug disclosure... the only problem is that many, many people do not yet
fully understand this process and will be led astray by fearmongering
men like Martin...]

[Disclaimer: This is, of course, my own opinion. Take it as that.]

Yet another armchair critic attempting to spread his irrational
fearmongering to other people.

Truth be told, the "Witty" worm was an extremely advanced worm of the
highest caliber. Whoever wrote it was perfectly capable of reverse
engineering the patch and finding the fixed issue. If I recall
correctly, it was something extremely easy to spot in a code audit, like
an unbounded sprintf. (Yes, this was confirmed with the researcher...)

Look, these people will not tell you the truth, but here it is:
unbounded sprintf calls take about two seconds to find in code. There
are many very simple automated tools you can use to find them.

The only reason this bug was not found before was that it was simply not
looked for. Maybe it was an "unbelief" factor. Very often outrageous
bugs creep into software and escape checking simply because no one can
believe they are really there. Once people found this sucker and
reported it, though, that removed all barriers. You don't even have to
do any kind of complicated cross checking of binaries -- just do a
sanity check with the simplest of automated tools!
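
To make that concrete for readers who do not audit C code, here is a
minimal sketch of the bug class in question. This is not the actual ISS
code -- just the general pattern of an unbounded sprintf into a fixed
buffer, next to the bounded fix:

  /* sketch.c -- illustrative only, not the ISS code */
  #include <stdio.h>

  /* VULNERABLE: "name" comes off the wire; sprintf() has no idea the
   * buffer is only 64 bytes, so a long name overflows it. */
  void log_user_bad(const char *name)
  {
      char buf[64];
      sprintf(buf, "user=%s", name);            /* unbounded copy */
      puts(buf);
  }

  /* FIXED: snprintf() is told the buffer size and truncates instead
   * of overflowing. */
  void log_user_good(const char *name)
  {
      char buf[64];
      snprintf(buf, sizeof(buf), "user=%s", name);
      puts(buf);
  }

  int main(void)
  {
      log_user_good("harmless input");
      /* log_user_bad() with a 500-byte name would smash the stack */
      return 0;
  }

Even a plain grep for sprintf across a source tree is enough of a
"sanity check" to flag every call site for a second look; the simple
automated tools I mean do little more than that.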

Let me note something else: any decent firewall or vulnerability
assessment company is going to immediately reverse engineer patches of
any product in order to best protect their customers. This means Martin
here either reverse engineers patches within the day at his firewall
company or he does not properly protect his customers at all. What I
want to know is this: why is he pretending this reverse engineering
process is some kind of black art that no one knows about, when he knows
full well that even junior researchers can do it?

Or, does he simply not know this?

Regardless, people, let us be guided by reason and not by fear.

<snip>


> 
> While I completely believe in "full disclosure" as the only way to
> ensure that software vendors take security seriously and act quickly
> to resolve security issues

No, you do not believe this.

We could use a lot fewer pretentious fakes in this world, especially in
the security industry.


>, even if it means that cyber criminals are given instructions how to
> write malicious code and attack, the security industry needs to
> cultivate the way how vulnerabilities are published.

Anybody who can write shellcode can compare binaries and figure out
where the hole is.

It is the administrators and protection companies such as your own that
need full disclosure to help protect themselves from these malicious
hackers.
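
For the skeptical reader: serious patch analysis is done at the
disassembly level with proper tooling, but the basic idea -- line the
patched and unpatched binaries up and look at what changed -- involves
no magic at all. Here is a toy, byte-level comparison (my own throwaway
sketch, not anyone's product) that prints the offset ranges where two
files differ. A real comparison works function by function so that
harmless relinking noise does not drown out the signal, but the changed
code around a fixed sprintf jumps out either way:

  /* bindiff.c -- toy byte-level comparison of two binaries.
   * Usage: bindiff old.bin new.bin */
  #include <stdio.h>

  int main(int argc, char *argv[])
  {
      if (argc != 3) {
          fprintf(stderr, "usage: %s old new\n", argv[0]);
          return 1;
      }

      FILE *a = fopen(argv[1], "rb");
      FILE *b = fopen(argv[2], "rb");
      if (!a || !b) {
          perror("fopen");
          return 1;
      }

      long off = 0, start = -1;
      for (;;) {
          int ca = fgetc(a);
          int cb = fgetc(b);
          if (ca == EOF && cb == EOF)
              break;
          if (ca != cb) {
              if (start < 0)
                  start = off;            /* entering a differing run */
          } else if (start >= 0) {
              /* leaving a differing run: report its offset range */
              printf("diff at 0x%lx..0x%lx\n",
                     (unsigned long)start, (unsigned long)(off - 1));
              start = -1;
          }
          off++;
      }
      if (start >= 0)
          printf("diff at 0x%lx..0x%lx\n",
                 (unsigned long)start, (unsigned long)(off - 1));

      fclose(a);
      fclose(b);
      return 0;
  }

Run that against the pre-patch and post-patch binaries and you have a
short list of places to point a disassembler at.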




> 
> Vendors often need more than the typical 30 days ultimatum given by
> security researches.

As our "upcoming advisories" page has shown, we give vendors whatever
time they need to fix security bugs... even if that takes well over a
year.

ISS, on the other hand - a company yours is partnered with - has been
known to release vulnerabilities immediately, without the vendor even
having time to issue a fix. THAT is crossing the line.

(Reference: the Apache chunked encoding bug controversy)



> Depending on the scope and nature of the vulnerability a vendor may
> need more time to test the patch and make sure that it works
> correctly. And then there is the whole issue of delivering the patch
> to the customers. Even in the ideal case when the patch can be
> delivered relatively quickly via some kind of automated update system,
> many companies opt to test the patch internally and delay its
> deployment (as we saw with XP SP2).
> 

Everyone knows a vendor should have more than 30 days.

Do they absolutely need more than 30 days? No!

In fact, vendors need to work much, much harder at reducing this time.
There are actually a lot of people out there who think 'no one has zero
day'. This is absurdity at its best. Every single one of our 'upcoming
advisories' is zero day.

That is not fearmongering or crying wolf -- that is just the facts.



> What I am calling for is that security researches take responsible
> approach in releasing information about security vulnerabilities,
> similar to NSG release policy.

NGS.

I think it is highly unethical not to be critical of long turnaround
times for fixing security bugs.

Some companies do fix their bugs very fast. Some do not. I have seen a
variant of one of my bugs found within a week -- even though the vendor
took four months to fix the bug. What were they doing with that time?

The bottom line is... vendors are not yet effectively duplicating the
same effort found in the full disclosure community.

The full disclosure community clearly does not need to follow the
incompetent QA practices of vendors; rather, vendor QA sorely needs to
follow the very competent practices of the global full disclosure
community.

Vendors that test and fix their security bugs faster are more effective
vendors. 

It is a true shame that vendors are getting blasted solely for having
bugs, as opposed to for having bugs and not fixing them in a timely
manner.


> With zero-day-attacks, it is no longer possible that technical details
> are published about the same time the patch is made available.

Again, you simply restate what is wrong, as if saying it many times
makes it true.

This is not the case.


> An industry accepted standard defining information release steps and
> time constrains is necessary here so that both vendors and customers
> are given enough time to make sure that they are secure before
> technical details (=instructions how to write malicious code) are
> released.

Any industry-accepted standard based on such inaccurate facts and
fear... would be entirely detrimental to the security of the world's
information systems.

You would end up keeping this information from the very people who need
to have it.

The only ones left with this information would be the hackers
themselves. And, contrary to how these guys would portray it, I assure
readers who are unfamiliar with these still unfortunately obscure
practices that it is absolutely trivial for anyone who can write
shellcode (let alone an advanced ASM virus) to reverse engineer a fix.

That is not crying wolf; those are the simple facts. Extremely simple
facts.

I cannot be any more ardent about this: the unbounded sprintf which was
the basis for the ISS security hole was an extremely obvious hole. The
'Witty' worm was a really nasty worm; I wholeheartedly agree there. But
whoever wrote it could have easily tracked the fix back to this
unbounded sprintf with or without disclosure. However, many
administrators and security software vendors would not have had such an
easy time of it.

This whole debate is smoke and mirrors.

> 
> Martin Viktora
> 
> --
> NTBugtraq Editor's Note:
> 
> Want to reply to the person who sent this message? This list 
> is configured such that just hitting reply is going to result 
> in the message coming to the list, not to the individual who 
> sent the message. This was done to help reduce the number of 
> Out of Office messages posters received. So if you want to 
> send a reply just to the poster, you'll have to copy their 
> email address out of the message and place it in your TO: field.
> --
> 

_______________________________________________
Full-Disclosure - We believe in it.
Charter: http://lists.netsys.com/full-disclosure-charter.html

