Message-ID: <200511032220.jA3MKq2s020023@linus.mitre.org>
Date: Thu, 3 Nov 2005 17:20:52 -0500 (EST)
From: "Steven M. Christey" <coley@...re.org>
To: bugtraq@...urityfocus.com
Subject: On Interpretation Conflict Vulnerabilities

In a post "SEC-CONSULT-SA-20051021-0: Yahoo/MSIE XSS", Bernhard
Mueller said:
>SEC-Consult believes that input-validation thru blacklists can just be
>a temporary solution to problems like this. From our point of view
>there are many other applications vulnerable to this special type of
>problem where vulnerabilities of clients and servers can be combined.
>
>...
>
> Excerpt from HTML-mails:
>
> ========================================================================
> SCRIPT-TAG:
> --cut here---
> <h1>hello</h1><s[META-Char]cript>alert("i have you
> now")</s[META-Char]cript></br>rrrrrrxxxxx<br>
> ---cut here---
>
>...
>
>Recommended hotfixes for webmail-users
>---------------
>
>Do not use MS Internet-Explorer.

This falls under a class of vulnerabilities that I refer to as either
"interpretation conflicts" or "multiple interpretation errors"
depending on what time it is, though I'm leaning toward interpretation
conflicts.

These types of problems frequently occur with products that serve as
intermediaries, proxies, or monitors between other entities - such as
antivirus products, web proxies, sniffers, IDSes, etc.
They are a special type of interaction error in which one product (in
this case, Yahoo email) performs reasonable actions but does not
properly model all behaviors of another product that it's interacting
with (in this case, Internet Explorer ignoring unusual characters
right in the middle of HTML tags). The intermediary/proxy/monitor
then becomes a conduit for exploitation due to the end product's
unexpected behavior.
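
The bypass described above can be reproduced with a short sketch (Python here purely for illustration; the filter and payload are hypothetical stand-ins, not Yahoo's actual code): a blacklist that strips well-formed <script> tags misses a tag with a stray byte spliced into its name, which a lenient renderer that discards the stray byte would still execute.

```python
import re

def blacklist_filter(html: str) -> str:
    """Hypothetical blacklist: remove literal <script>...</script> blocks."""
    return re.sub(r'(?is)<script\b.*?</script>', '', html)

# A payload with a control character spliced into the tag name does not
# match the blacklist pattern, so it passes through unchanged.  A browser
# that silently drops the stray byte would then see a working <script> tag.
payload = '<h1>hello</h1><s\x00cript>alert("i have you now")</s\x00cript>'

print(repr(blacklist_filter(payload)))   # payload survives intact
print(repr(blacklist_filter('<script>alert(1)</script>')))  # well-formed tag is stripped
```

The intermediary's filter is behaving "reasonably" here; the conduit exists only because its model of the downstream parser is incomplete.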

Some examples:
- Ptacek/Newsham's famous IDS evasion paper used interpretation
conflicts to prevent IDSes from properly reconstructing network
traffic as it would be processed by end systems.
- Many of the Anti-Virus evasion techniques you see these days
involve interpretation conflicts - e.g. the magic byte problem,
multiple content-type headers, and so on
- The recent problem with phpBB and others, because they did not
account for how Internet Explorer renders HTML in corrupted .GIF
images, is another example of an interpretation conflict.
- Many unusual XSS manipulations are due to interpretation conflicts
in which one web browser supports a non-standard feature that
others do not. Netscape had an unusual construct - something like
"&{abc}" - that even a whitelist might not catch.

In my opinion, the "responsibility" for avoiding interpretation
conflicts falls with:
- the intermediaries/proxies/monitors, if the problem involves an
incomplete model of *normal*, reasonable, and/or
standards-compliant behavior
- the end products, if the end product behavior does not conform
with established standards
- the standards or protocols, if they are defined in ways that are
too vague or flexible

However, if the end products already exhibit unexpected behaviors, the
reality is that intermediaries are forced into anticipating all
possible interpretation conflicts, and blamed if they do not.

Mueller also said:
> Do not use blacklists on tags and attributes. Whitelist
> special/meta-characters.

Whitelists, while better than blacklists, can still be too permissive.
This is especially the case with interpretation conflicts.
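
To make the point concrete, here is a hedged sketch (a hypothetical sanitizer, not any real library): an allowlist built without knowledge of Netscape's non-standard "&{...}" JavaScript-entity construct might reasonably treat ampersands (needed for entities like &amp;) and braces as harmless text, and so pass the dangerous construct through untouched.

```python
# Hypothetical character allowlist: letters, digits, common punctuation,
# plus '&' and braces, chosen by an author who does not know that one
# browser treats "&{...}" as executable script.
ALLOWED = set(
    'abcdefghijklmnopqrstuvwxyz'
    'ABCDEFGHIJKLMNOPQRSTUVWXYZ'
    '0123456789 .,;:!?()&{}-_'
)

def whitelist_filter(text: str) -> str:
    """Keep only allowlisted characters; drop everything else."""
    return ''.join(ch for ch in text if ch in ALLOWED)

# A conventional payload is neutered: '<', '>', quotes and '/' are dropped.
print(whitelist_filter('<script>alert("x")</script>'))
# But every character of the Netscape-style construct is on the allowlist,
# so it passes through intact.
print(whitelist_filter('&{alert(1)}'))
```

The whitelist fails for the same underlying reason as the blacklist: the filter's model of what a downstream browser treats as code is incomplete.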

As I've suggested previously, Jon Postel's wisdom "Be liberal in what
you accept, and conservative in what you send" has been a boon to the
growth of networking, but blind adherence to this wisdom is a
dangerous enabler of subtle vulnerabilities that will prevent us from
ever having full control over the data that crosses our networks.

- Steve