Message-ID: <20041008014414.fb9292c3@mail.kerio.com>
Date: Fri, 8 Oct 2004 01:44:14 -0700
From: Martin Viktora <mviktora@...io.com>
To: Windows NTBugtraq Mailing List
<NTBUGTRAQ@...TSERV.NTBUGTRAQ.COM>,
full-disclosure@...ts.netsys.com, bugtraq@...urityfocus.com
Subject: Re: RE: Disclosure policy in Re: RealPlayer
vulnerabilities
Brian,
First, you wrote that I do not really believe in "full disclosure"
even though I clearly stated that I am for it. I find it a little
difficult to argue at that level of reasoning, but please allow me
to clarify what I tried to propose anyway. I truly believe that
vulnerability disclosure should follow these steps:
1. The vulnerability is discovered and the vendor is notified.
2. After time X, the vulnerability's existence is publicly announced
without giving specific details. Users are urged to apply the patch.
3. After time Y, the vulnerability's technical details are disclosed.
While this approach might not provide benefits in all cases, it
certainly should not hurt either. We can of course argue about what the
appropriate times X and Y are, but that would be a discussion for
another time, and I would be happy if we got there.
So what are those benefits? I disagree with you that a vulnerability's
technical details are worthless to the attacker and that he can figure
out the vulnerability as easily without them as he could with them. An
application may consist of many binary files that a patch may modify in
hundreds of places. The vulnerability might be a buffer overflow, some
generic error in the application's logic, or just a bad default
configuration. Yes, eventually you will be able to crack it, but it is
going to take time, determination and resources. With the technical
details, you are pointing it out: here it is, in this file, under
precisely these conditions.
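To illustrate the distinction, here is a deliberately naive sketch of my
own (not any particular tool): the mechanical first step, comparing the
patched binary against the original byte by byte, is trivial, but all it
yields is a list of changed regions. Working out which of those regions
hides the flaw, and under what conditions it can be reached, is where
the attacker's time goes.

    /* bindiff.c -- a deliberately naive sketch (not any particular tool):
     * compare the patched binary against the original, byte by byte, and
     * list every region that changed.  Usage: ./bindiff original patched  */
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        if (argc < 3) {
            fprintf(stderr, "usage: %s original patched\n", argv[0]);
            return 1;
        }
        FILE *a = fopen(argv[1], "rb"), *b = fopen(argv[2], "rb");
        if (!a || !b) { perror("fopen"); return 1; }

        long offset = 0, start = -1, regions = 0;
        int ca = 0, cb = 0;
        for (;;) {
            ca = fgetc(a);
            cb = fgetc(b);
            if (ca == EOF || cb == EOF)
                break;
            if (ca != cb && start < 0)
                start = offset;                /* a changed region begins */
            if (ca == cb && start >= 0) {      /* a changed region ends   */
                printf("changed bytes at offsets %ld-%ld\n", start, offset - 1);
                regions++;
                start = -1;
            }
            offset++;
        }
        if (start >= 0) {
            printf("changed bytes at offsets %ld-%ld\n", start, offset - 1);
            regions++;
        }
        if (ca != cb)                          /* one file is longer */
            printf("(files also differ in length beyond offset %ld)\n", offset);
        printf("%ld changed regions; every one still has to be understood\n",
               regions);
        fclose(a);
        fclose(b);
        return 0;
    }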
Without the technical details, depending on the motives, skills,
determination and resources of the attacker, one of the following will
happen:
1. It is really easy to figure out where the vulnerability is, or the
attacker is very skillful. He writes the malicious code and releases
it. A sad day for everybody.
2. It is not that easy to figure out, but the attacker is determined
and eventually he succeeds. However, any time we buy by not helping
him pays off. Every hour we gain can mean thousands of systems patched
in time. Not so sad a day for, hopefully, a lot of people.
3. By not publishing details, we discouraged 200 script kiddies around
the world because it would be too much hard work and so much less fun.
Another 100 gave up after getting bored trying. Not a bad day at all.
As you can see, I am not talking about absolute security. But I say
that if there is anything reasonable we can do to prevent unnecessary
security incidents, we should try to do it.
Second, you say that vendors must work much harder at reducing patch
development time, and I could not agree with you more, especially after
what I stated above.
Third, anybody who releases information that may lead to unnecessary
security incidents is not doing a good thing. And that applies equally
to ISS's release of the Apache chunked encoding vulnerability.
Martin Viktora
> >
> > Apparently, both eEye Digital Security (a US software security company)
> > and NGS Software Ltd (a UK-based research firm) claim credit for
> > discovering the recent vulnerability in RealPlayer. Less interesting than
> > the credit, though, is how the two companies decided to announce the
> > vulnerability. While NSG took a responsible approach, quote:
> >
> > > NGSSoftware are going to withhold details about these flaws for three
> > > months. Full details will be published on the 6th of January 2005. This
> > > three month window will allow users of RealPlayer the time needed to
> > > apply the patch before the details are released to the general public.
> > > This reflects NGSSoftware's new approach to responsible disclosure.
> >
> > eEye went ahead and released technical details about the vulnerability
> > just a few days after the vendor made the patch available. Many of you
> > may remember another vulnerability disclosure made by eEye in March 2004,
> > when they released technical information about a flaw in ISS security
> > products (the ICQ parsing module) that was followed by a "zero-day
> > attack": within 36 hours the particularly damaging "Witty" worm struck
> > users of ISS products. (The worm damaged users' data by writing over
> > random hard disk sectors.)
>
> [I hesitate to even respond, as outrageous as this post is, since it is
> obviously wrong to anyone who understands the real process of security
> bug disclosure... the only problem is that many, many people do not yet
> fully understand this process and will be led astray by fearmongering
> men like Martin...]
>
> [Disclaimer: This is, of course, my own opinion. Hold it as that.]
>
> Yet another armchair critic attempting to spread his irrational
> fearmongering to other people.
>
> Truth be told, the "Witty" worm was an extremely advanced worm of the
> highest caliber. Whoever wrote it was perfectly capable of reverse
> engineering the patch and finding the fixed issue. If I recall correctly,
> it was something extremely easy to spot in a code audit, like an
> unbounded sprintf. (Yes, this was confirmed with the researcher...)
>
> Look, these people will not tell you the truth, but here it is:
> unbounded sprintf's take about two seconds to find in code. There are
> many very simple automated tools that will find them.
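>
> To make that concrete -- an illustrative fragment of my own, not the
> actual ISS code -- an unbounded sprintf and its bounded fix look
> something like this:
>
>     /* Illustrative only -- not the actual ISS code.  A fixed-size stack
>      * buffer filled by sprintf() from attacker-controlled input:        */
>     #include <stdio.h>
>
>     void log_nickname(const char *nick)    /* nick arrives off the wire */
>     {
>         char buf[128];
>         sprintf(buf, "nickname: %s", nick);     /* no bound: overflow    */
>         puts(buf);
>     }
>
>     /* The fix is a one-liner, which is exactly why it stands out in the
>      * patched binary: */
>     void log_nickname_fixed(const char *nick)
>     {
>         char buf[128];
>         snprintf(buf, sizeof buf, "nickname: %s", nick);   /* bounded    */
>         puts(buf);
>     }
>
>     int main(void)
>     {
>         char input[512];                /* longer than buf on purpose    */
>         if (fgets(input, sizeof input, stdin))
>             log_nickname(input);        /* overflows buf on long input   */
>         return 0;
>     }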
>
> The only reason this bug was not found before was that it was simply not
> looked for. Maybe it was a "disbelief" factor. Very often outrageous
> bugs creep into software and escape review simply because no one could
> believe they were really there. Once people found this sucker and
> reported it, though, that removed all barriers. You don't even have to do
> any kind of complicated cross-checking of binaries -- just do a sanity
> check with the simplest of automated tools!
>
> Let me note something else: any decent firewall or vulnerability
> assessment company is going to immediately reverse engineer patches of
> any product in order to best protect their customers. This means Martin
> here either reverse engineers patches within a day at his firewall
> company, or he does not properly protect his customers at all. What I
> want to know is why he is pretending this reverse engineering process is
> some kind of black art that no one knows about, when he knows full well
> that even junior researchers can do it.
>
> Or does he simply not know this?
>
> Regardless, people, let us be guided by reason and not by fear.
>
> <snip>
>
>
> >
> > While I completely believe in "full disclosure" as the only way to
> > ensure that software vendors take security seriously and act quickly
> > to resolve security issues
>
> No, you do not believe this.
>
> We could use a lot fewer pretentious fakes in this world, especially in
> the security industry.
>
>
> >, even if it means that cyber criminals are given instructions on how
> > to write malicious code and attack, the security industry needs to
> > cultivate the way vulnerabilities are published.
>
> Anybody who can write shellcode can figure out how to compare binaries
> and find where the hole is.
>
> It is the administrators and protection companies such as your own that
> need full disclosure to help protect them from these malicious hackers.
>
>
>
>
> >
> > Vendors often need more than the typical 30-day ultimatum given by
> > security researchers.
>
> As our "upcoming advisories" page has shown, we give vendors whatever
> time they need to fix security bugs... even if that takes well over a
> year.
>
> ISS on the other hand - with whom your company is partnered - has been
> known to release vulnerabilities immediately, without the vendor even
> having time to issue a fix. THAT is crossing the line.
>
> (Reference: the Apache chunked encoding bug controversy)
>
>
>
> > Depending on the scope and nature of the vulnerability, a vendor may
> > need more time to test the patch and make sure that it works correctly.
> > And then there is the whole issue of delivering the patch to the
> > customers. Even in the ideal case when the patch can be delivered
> > relatively quickly via some kind of automated update system, many
> > companies opt to test the patch internally and delay its deployment
> > (as we saw with XP SP2).
> >
>
> Everyone knows a vendor should have more than 30 days.
>
> Do they absolutely need more than 30 days? No!
>
> In fact, vendors need to work much, much harder at reducing this time.
> There are actually a lot of people out there who think 'no one has zero
> day'. This is absurdity at its best. Every single one of our 'upcoming
> advisories' is zero day.
>
> That is not fearmongering or crying wolf -- those are just the facts.
>
>
>
> > What I am calling for is that security researchers take a responsible
> > approach in releasing information about security vulnerabilities,
> > similar to NSG's release policy.
>
> NGS.
>
> I think it is highly unethical not to be critical of long turnaround
> times for fixing security bugs.
>
> Some companies do fix their bugs very fast. Some do not. I have seen a
> variant of one of my bugs get found within a week -- even though the
> vendor took four months to fix the bug. What were they doing with that
> time?
>
> The bottom line is... vendors are not yet effectively duplicating the
> effort found in the full disclosure community.
>
> The full disclosure community clearly does not need to follow the
> incompetent QA practices of vendors; but vendor QA sorely needs to
> follow the very competent practices of the global full disclosure
> community.
>
> Vendors that test and fix their security bugs faster are more effective
> vendors.
>
> It is a true shame vendors are getting blasted solely for having bugs,
> as opposed to having bugs and not fixing them in a timely manner.
>
>
> > With zero-day attacks, it is no longer possible for technical details
> > to be published at about the same time the patch is made available.
>
> Again, you simply restate the same wrong claim as if saying it many
> times would make it true.
>
> This is not the case.
>
>
> > An industry-accepted standard defining information release steps and
> > time constraints is necessary here, so that both vendors and customers
> > are given enough time to make sure that they are secure before the
> > technical details (= instructions on how to write malicious code) are
> > released.
>
> Any industry-accepted standard based on such inaccurate facts and
> fear... would be entirely detrimental to the security of the world's
> information systems.
>
> You would strive to keep this information from exactly the people who
> need to have it.
>
> The only ones left with this information would be the hackers
> themselves. And, contrary to how these guys would portray it, I assure
> readers who are unfamiliar with these still unfortunately obscure
> practices that it is absolutely trivial for anyone who can write
> shellcode (let alone some advanced ASM virus) to reverse engineer a fix.
>
> That is not crying wolf; those are the simple facts. Extremely simple
> facts.
>
> I cannot be any more ardent about this: the unbounded sprintf which
> was the basis for the ISS security hole was an extremely obvious one.
> The "Witty" worm was a really nasty worm; I wholeheartedly agree there.
> But whoever wrote it could easily have tracked the fix back to this
> unbounded sprintf with or without disclosure. However, many
> administrators and security software vendors would not have had such an
> easy time of it.
>
> This whole debate is about smoke and mirrors.
>
> >
> > Martin Viktora
> >
_______________________________________________
Full-Disclosure - We believe in it.
Charter: http://lists.netsys.com/full-disclosure-charter.html