Message-ID: <405FCABB.1040900@bastille-linux.org>
From: jay at bastille-linux.org (Jay Beale)
Subject: When do exploits get used?
Luke Scharf wrote:
>On Mon, 2004-03-22 at 17:13, Jay Beale wrote:
>
>
>>You may find this discussion academic. But the exploit writers and the
>>worm writers are getting faster. And that's what should scare us into
>>moving beyond patches. That's what should get us moving to better
>>network and host configurations. That's what should get us to evaluate
>>patching as, at most, the easy, but most critical, 50%.
>>
>>
>
>I would say that we could all agree that not patching is a recipe for
>disaster -- and that it's very easy to keep up to date.
>
>
Yes, this is obvious. If we don't patch, we're just left vulnerable --
the windows of vulnerability close only when we finally upgrade the O/S!
>But, my 90% figure comes from the accidental plugging of unpatched
>Windows machines into the open network. Every time I do that, the
>machine is running msblast in a few minutes. And as near as I can tell,
>it's not my machines that are doing it (except for that one unpatched
>machine that I spend an hour rebuilding)...
>
>
Well, I still worry that you've oversimplified things with the 90%
figure. By using it to convince people to deploy patches quickly, you're
setting up the expectation that once everyone patches, there won't be
any more compromises.
The purpose of my previous explanation was to show that you're still a
slave to timing -- you may not be able to patch fast enough, either
because you've got a previously unknown vuln (a 0-day), because your
vendor isn't fast enough, or because the attacker/worm arrives and
begins exploiting systems too quickly for your regular periodic patching
practice. In the last case, you might patch every day, but the worm
could hit systems 6 hours after your last patch cycle, 18 hours before
you'll deploy the patch against the worm's vuln.
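To make that timing concrete, here's a tiny sketch of the arithmetic;
the 24-hour cycle and 6-hour offset are just the numbers from the
example above:

# Worst-case exposure under periodic patching: a worm that arrives t
# hours after the last patch run leaves you exposed until the next run.
PATCH_CYCLE_HOURS = 24      # patch once a day, per the example above
worm_arrival_offset = 6     # worm hits 6 hours after the last cycle

exposure = PATCH_CYCLE_HOURS - worm_arrival_offset
print(f"exposed for {exposure} hours")   # -> exposed for 18 hours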
What I'm trying to argue here is that we should be patching, but that we
should also begin locking down hosts. NSA's Information Assurance
Directorate found that you could use well-known best practices to remove
or mitigate over 90% of the vulnerabilities in Windows 2000. Kerry
Steele, working on behalf of the Center for Internet Security, found a
similar over-90% mitigation rate on Red Hat Linux.
The critical thing to understand is that you tweak the security
settings on the system _before_ the vulnerabilities are discovered and
still get that success rate. It's not precognition -- you're simply
configuring the machine for better security, based on an understanding
of the machine's function.
It's very effective, and the techniques have been well understood for
years. Yet very few organizations make this a priority for their sysadmins.
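As a toy illustration of that function-based approach (my own sketch,
not the NSA or CIS guidance itself), here's a minimal Python script
that audits a Linux host against a hypothetical per-role baseline of
allowed listening ports:

#!/usr/bin/env python3
# Minimal hardening-audit sketch. The role names and allowed-port lists
# are hypothetical examples, not taken from the NSA or CIS benchmarks.

# Per-role baselines: only these TCP ports should be listening.
ROLE_BASELINES = {
    "workstation": {22},           # ssh only
    "web-server":  {22, 80, 443},  # ssh, http, https
}

def listening_tcp_ports():
    """Parse /proc/net/tcp for sockets in LISTEN state (hex 0A)."""
    ports = set()
    with open("/proc/net/tcp") as f:
        next(f)                          # skip the header line
        for line in f:
            fields = line.split()
            local_addr, state = fields[1], fields[3]
            if state == "0A":            # TCP_LISTEN
                ports.add(int(local_addr.split(":")[1], 16))
    return ports

def audit(role):
    extra = listening_tcp_ports() - ROLE_BASELINES[role]
    for port in sorted(extra):
        print(f"WARN: unexpected listener on TCP/{port} for role '{role}'")
    return not extra

if __name__ == "__main__":
    raise SystemExit(0 if audit("workstation") else 1)

The point is only that the baseline comes from the machine's job, so
the check stays useful against vulns nobody has discovered yet.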
BTW, I'm not just arguing for patching and hardening; I'm also arguing
that we should start considering better network architecture and access
control. Internal router rulesets or firewalls could go a long way
toward slowing the propagation of a worm on the LAN. Worms spread
throughout an organization amazingly quickly in large part because we're
still in the "crunchy outer shell, chewy center" model of firewall
deployment. It doesn't have to be this way -- engineering and accounting
workstations rarely need to communicate with each other over LAN-based
protocols; they tend to interact through central servers. The internal
router/firewall policy should reflect this. But that's a whole new can
of worms.
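For illustration, here's a sketch of that default-deny internal policy
expressed as a rule table in Python; the subnets and allowed flows are
hypothetical placeholders, not a recommended address plan:

#!/usr/bin/env python3
# Sketch of a default-deny internal segmentation policy. The address
# plan and ALLOWED_FLOWS table below are hypothetical placeholders.
import ipaddress

SUBNETS = {
    "engineering": ipaddress.ip_network("10.1.0.0/24"),
    "accounting":  ipaddress.ip_network("10.2.0.0/24"),
    "servers":     ipaddress.ip_network("10.9.0.0/24"),
}

# Workstation segments may reach the central servers, not each other.
ALLOWED_FLOWS = [
    ("engineering", "servers"),
    ("accounting",  "servers"),
]

def segment_of(ip):
    addr = ipaddress.ip_address(ip)
    for name, net in SUBNETS.items():
        if addr in net:
            return name
    return None

def permitted(src_ip, dst_ip):
    """Default deny: a flow passes only if its segment pair is listed."""
    return (segment_of(src_ip), segment_of(dst_ip)) in ALLOWED_FLOWS

# A worm on an engineering box can reach the servers, but a direct hop
# to an accounting workstation is dropped at the internal router:
assert permitted("10.1.0.5", "10.9.0.10")      # engineering -> servers
assert not permitted("10.1.0.5", "10.2.0.7")   # engineering -> accounting

A real deployment would push the same table into internal router ACLs
or firewall rules; the point is that the default is deny, so a worm's
scan across segments dies at the router instead of sweeping the LAN.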
- Jay