Message-ID: <44274C5C.4080108@ddplus.net>
Date: Mon Mar 27 04:04:23 2006
From: dinis at ddplus.net (Dinis Cruz)
Subject: Re: [Owasp-dotnet] RE: 4 Questions: Latest IE
vulnerability, Firefox vs IE security, User vs Admin risk profile,
and browsers coded in 100% Managed Verifiable code
Hi Jeff, comments inline
Jeff Williams wrote:
> Great topics.
>
> I'm a huge fan of sandboxes, but Dinis is right, the market hasn't really
> gotten there yet. No question that it would help if it was possible to run
> complex software like a browser inside a sandbox that restricted its ability
> to do bad things, even if there are vulnerabilities (or worse -- malicious
> code) in them.
Absolutely, and do you see any other alternative? Or should we just
continue to TRUST every bit of code that is executed on our computers,
and TRUST every single developer/entity that had access to that code
during its development and deployment?
> I'm terrified about the epidemic use of libraries that are
> just downloaded from wherever (in both client and server applications). All
> that code can do *whatever* it wants in your environments folks!
>
>
Yes they can, and one of my original questions was: "When considering
the assets, is there REALLY any major difference between running code as
a normal user versus as an administrator?"
> Sandboxes are finally making some headway. Most of the Java application
> servers (Tomcat included) now run with their sandbox enabled (albeit with a
> weak policy). And I think the Java Web Start system also has the sandbox
> enabled. So maybe we're making progress.
>
True, but are these really secure sandboxes?
I am not a Java expert so I can't give you specific examples, but on the
.Net Framework a Partially Trusted 'Sandbox' that grants the
UnmanagedCode, MemberAccess Reflection, or SkipVerification permission
should not be called a 'Sandbox', since it can be easily compromised.
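The same risk exists on the Java side: if a policy grants the real
java.lang.reflect.ReflectPermission "suppressAccessChecks", sandboxed
code can use reflection to read private state it was never meant to see.
A minimal sketch of that escape (the Vault class and its field are
hypothetical, invented here purely for illustration):

```java
import java.lang.reflect.Field;

// Hypothetical class standing in for code that trusts private visibility.
class Vault {
    private final String secret = "s3cr3t";
}

public class ReflectionEscape {
    public static void main(String[] args) throws Exception {
        Vault v = new Vault();
        Field f = Vault.class.getDeclaredField("secret");
        // Under a security manager, this call is exactly what
        // ReflectPermission("suppressAccessChecks") authorizes.
        f.setAccessible(true);
        System.out.println(f.get(v));  // the "private" secret is exposed
    }
}
```

In other words, a single overly broad permission collapses the access
control the rest of the sandbox policy is trying to enforce.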
> But, if you've ever tried to configure the Java security policy file, use
> JAAS, or implement the SecurityManager interface, you know that it's *way*
> too hard to implement a tight policy this way.
And .Net has exactly the same problem. It is super complex to create a
.Net application that can be executed in a secure Partially Trusted Sandbox.
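Jeff's point about the Java policy file is easy to illustrate: a tight
java.policy entry has to enumerate, per codebase, every single
permission the code actually needs (the codebase, paths, and host below
are hypothetical):

```
grant codeBase "file:/opt/myapp/app.jar" {
    // only what this application demonstrably needs -- nothing more
    permission java.io.FilePermission "/opt/myapp/data/-", "read";
    permission java.net.SocketPermission "db.example.com:5432", "connect";
};
```

Multiply that by every library and every deployment environment, and it
becomes clear why most people give up and grant AllPermission instead.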
> You end up granting all
> kinds of privileges because it's too difficult to do it right.
And the new VS2005 makes this allocation of privileges very easy: "Mr.
developer, your application crashed because it didn't have the required
permissions. Do you want to add these permissions? Yes/No ...
(developer clicks Yes) ... You are adding the permission
UnmanagedCodePermission, are you sure? Yes/No ... (developer clicks Yes,
with support from the application architect, and confident that all
competitor applications require similar permissions)"
> And only the
> developer of the software could reasonably attempt it, which is backwards,
> because it's the *user* who really needs it right.
Yes, it is the user's responsibility (i.e. their IT Security and Server
Admin staff) to define the secure environment (i.e. the Sandbox) in
which 3rd-party or internally developed applications run inside their
data center.
> It's possible that sandboxes are going the way of multilevel security (MLS).
> A sort of ivory tower idea that's too complex to implement or use.
I don't agree that the problem is too complex. What we have today is
very complex architectures / systems with too many interconnections.
Simplify the lot, get enough resources with the correct focus involved,
and you will see that it is doable.
> But it
> seems like a really good idea that we should try to make practical. But even
> if they do start getting used, we can't just give up on getting software
> developers to produce secure code. There will always be security problems
> that sandboxes designed for the platform cannot help with.
>
Of course, I am not saying that developers should produce insecure code.
I am the first to argue that developers must have a firm and solid
understanding of the tools and technologies that they use and, just as
important, of the security implications of their code.
> I'm with Dinis that the only way to get people to care is to fix the
> externalities in the software market and put the burden on those who can
> most easily avoid the costs -- the people who build the software. Maybe then
> the business case will be more clear.
>
Yes, but the key here is not money (since that would also kill
large chunks of the Open Source world).
One of the solutions that I like is to require (by law) that all
software companies disclose information about the vulnerabilities that
they are aware of (look at the eEye model of disclosing information
about 'reported but unpatched vulnerabilities').
Basically, give the user data (as in information) that he can digest and
understand, and you will see the user(s) making the correct decision(s).
> (Your last point about non-verified MSIL is terrifying. I can't think of any
> reason why you would want to turn off verification -- except perhaps startup
> speed. But that's a terrible tradeoff.)
>
See my previous post (on this same thread) about this issue, but I think
that .Net is not alone in skipping verification for locally executed code :)
Dinis Cruz
Owasp .Net Project
www.owasp.net