From: security at brvenik.com (Jason)
Subject: Avoiding being a good admin - was DCOM RPC exploit  (dcom.c)

Let's get real with the numbers here. I think these are on the generous
side, mind you.

Background statement.

** If you have systems worth protecting you hire people capable of
protecting them **

I think I will add a new one:

** If you have 150,000 systems, then you most likely have someone with the
knowledge to make it all work, and that someone probably understands
staffing, budgets, planning, building a business case, presenting it... **

Paul Schmehl wrote:
> When you have 150,000 machines worldwide, having 1% of those unpatched
> (which is a 99% *success* rate) means you have 1500! vulnerable
> machines.  Most situations that I'm familiar with were in the tens - not
> even the hundreds - but it only took 10 or 15 machines to take down the
> entire network due to the nature of that worm.  10 or 15 boxes
> represents 1/100th of a percent of the total, yet that small number
> could completely destabilize a network and cause untold hours of work for
> the admins and networking staff.

OK, when you have 150,000 machines worldwide, having 100% of them running
some version of SQL that is exposed to the network, any network, is just
plain _stupid_. With even a basic understanding of best practice you can
knock at least 10% out of the vulnerable system population... there go
15,000 systems.

Let's use these numbers anyway. Even if 50% of the systems genuinely
require SQL and cannot otherwise be mitigated, we have 75,000 systems.

Now, assuming a 50% automatic patch penetration rate on those 75,000
systems, you have 37,500 systems still exposed five days after the release
of the patch.
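
For anyone who wants to check the envelope math, here it is as a few lines
of Python. The 150,000 figure is Paul's; the 50% figures are the same
assumptions stated above, nothing more:

# Back-of-envelope: how many systems are still exposed after auto-patching.
total_systems = 150000        # worldwide fleet, per Paul's numbers
needs_sql     = 0.50          # assume half genuinely need SQL exposed
auto_patched  = 0.50          # assume half pick up the patch automatically

exposed = total_systems * needs_sql * (1 - auto_patched)
print(exposed)                # 37500.0 systems left to touch by hand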

6 months * 30 days = 180 days to patch.

180 days - (6 months * 8 weekend days per month = 48 days) = 132 days.

Subtract 2 days' grace for holidays or whatever and you have 130 days in
which to manually touch all of these systems and patch them.

37,500 systems / ~75 systems per site = 500 sites.

NOTE: systems per site should be much higher in a global org with 150k
systems, and the ability to patch some of these without touching every one
should be fairly high, at least 20%.

500 sites / ~6 sites per person =  ~83 people to patch.

6 sites * 75 systems = 450 systems per person

130 days / 450 systems = roughly 0.3 days (a couple of hours) per person
per system to patch.
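
Same envelope, staffing side, in a few more lines of Python; the per-site
and per-person ratios are the assumptions already stated above:

# How much calendar time each admin has per system, given the assumptions above.
exposed_systems    = 37500
calendar_days      = 6 * 30                    # six months
weekend_days       = 6 * 8                     # ~8 weekend days per month
grace_days         = 2                         # holidays or whatever
work_days          = calendar_days - weekend_days - grace_days   # 130

sites              = exposed_systems / 75      # ~75 systems per site -> 500
people             = sites / 6                 # ~6 sites per person  -> ~83
systems_per_person = exposed_systems / people  # 450

print(work_days / systems_per_person)          # ~0.29 days, a couple of hours each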

Even on the generous side, the actual hands-on time works out to a few
minutes per box, so most of every day is still free for normal work.

Call it ~21.6 manhours of patching per person over the 6 months, which is
about 2.7 working days of lost productivity each.

If we wanted to maintain that productivity, we could hire 4 contractors to
patch for 3 months and get 2,688 manhours to apply to the problem. At the
outrageous rate of $150 an hour that is $403,200 to address the problem in
the time available. Add another outrageous $600,000 for misc. expenses and
travel and we are out roughly $1,000,000.
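
One way to get to those 2,688 manhours and the dollar figures, again in
Python; the 28-day months and 8-hour days are my reading of how that number
breaks down, not gospel:

# Buying the lost hours back with contractors instead.
people           = 83
hours_per_person = 21.6                  # ~2.7 days of patching each
hours_lost       = people * hours_per_person            # ~1793 manhours

contractor_hours = 4 * 3 * 28 * 8        # 4 contractors, 3 months, 8h days = 2688
contractor_rate  = 150                   # deliberately outrageous $/hour
misc_and_travel  = 600000                # also deliberately outrageous

print(contractor_hours * contractor_rate)                    # 403200
print(contractor_hours * contractor_rate + misc_and_travel)  # 1003200, call it $1M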

Given a conservative half-day of downtime for only 100,000 of the more
likely 150,000 employees, at a very conservative average burden of $10 per
hour, you have spent $4,000,000 in productivity losses alone. This
completely ignores costs like lost data, lost confidence, and work that has
to be redone...
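
And the other side of the ledger, the downtime cost, so the comparison is
explicit (all three inputs are the conservative figures above):

# Productivity cost of the worm actually hitting.
affected_employees = 100000      # conservative: only 100k of ~150k staff
hours_down         = 4           # half a working day each
burden_per_hour    = 10          # very conservative fully-loaded $/hour

print(affected_employees * hours_down * burden_per_hour)     # 4000000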

> Now anybody who wants to tell me that a 0.01% failure rate in a patching
> program proves the admins are incompetent is simply ignorant of the
> issues.  I guess it's just impossible for people who don't actually run
> a large network to grasp the nature of the issues.

I think that failure rate in patching, when you reasonably have a couple of
hours for every system that needs to be touched, proves incompetence.

It proves an inability to manage a network of that size.
It proves that project management is lacking.
It proves a lack of defined and accepted response plans.

It proves there is no single place where things break down, and when
several of those places take the attitude that the problem is
insurmountable, we end up with Slammer causing major problems.

** Oh my gawd, I think I need a bigger clue stick. **

Sorry to mix mails together; it is the same concept...

John Airey wrote:

> Imagine a company where a user is told by the IT department that such and
> such a computer can't be used. He then goes and buys it on his own credit
> card and claims it back on expenses (this happens more than you realise).
> Said IT department now has to support the machine that he was told he
> couldn't have, probably because someone higher up in the organisation says
> that it has to. This computer will probably consume a disproportionate
> amount of support time. The irony is that the purchaser will probably then
> tell you it was a bargain (yeah, right!).
>

Imagine the same company where the user has requirements, those
requirements are fed into a defined and accepted policy, management
understands that policy, and the IT group understands that the world has
needs.

Now IT needs to take into account these needs and present a viable
solution in a timely manner. The problem here is that IT thinks that they
can hide behind security because they "do not have the time" to make these
needs a reality. The reality is they do not have time because they do not
want to solve the problem, this end up costing them more time and forcing
them to do things they do not want to do.

> The bottom line is that these days, the IT departments do not have enough
> power to enforce any radical suggestions. I'd be surprised if any
> organisation exists (outside of the military) that insists on knowing the
> MAC addresses of machines before they get connected to the network. (In our
> case we monitor MAC addresses instead as we can then spot network
> problems).

:-)

I know of several; the largest has 13,000 _client_ systems, nearly 5,000 of
them laptops.

Yes, there are times when the fire drill has to be run to meet the needs of
some business-critical project; however, these are the exception, not the
rule.

Then you say: what happens when these laptops are roaming on the company
network or at a different office? This is pretty easy to handle, actually.
Roaming gets handled by designating conference rooms and temporary office
space as hostile networks and forcing those connections to use a VPN to get
to _anything_ they need; otherwise it is just like connecting to your home
cable modem, only with better security for the clients and tighter auditing.
If you hop onto a wireless segment you need to:

* auth to the AP
* auth to the firewall
* VPN to use corp services

Failure to VPN only gets you authenticated web browsing.
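
If that sounds like a lot of machinery, the decision logic is trivial. Here
is a sketch in Python; the segment names and the flags are made up for
illustration, the shape of the policy is the point:

# Sketch of the roaming-client policy described above.  Segment names and
# helper flags are invented for illustration only.
HOSTILE_SEGMENTS = {"conference-room", "temp-office", "wireless"}

def allowed(segment, ap_authed, fw_authed, on_vpn, service):
    """Decide what a roaming client may reach from a given segment."""
    if segment not in HOSTILE_SEGMENTS:
        return True                         # normal corp segment, normal rules apply
    if not (ap_authed and fw_authed):
        return False                        # no auth to AP and firewall, no packets
    if on_vpn:
        return True                         # VPN gets you to corp services
    return service == "authenticated-web"   # otherwise: authenticated browsing only

print(allowed("wireless", True, True, False, "smtp"))               # False
print(allowed("wireless", True, True, False, "authenticated-web"))  # True
print(allowed("wireless", True, True, True,  "smtp"))               # True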

> The truth is that all sysadmins are involved in damage limitation, which
> is why we subscribe to this list. We do our utmost to prevent damage, but
> recent history shows us just one user clicking on a dodgy email attachment
> can bring down major networks. In other cases not knowing what a firewall
> should and shouldn't do has caused other outages (even affecting
> Microsoft).

Only because best practices were not implemented. Had they been, the damage
would have been significantly reduced and the exposure nearly completely
eliminated.

Not knowing what a firewall should do... Sorry, no sympathy here.

>
> After all, if what has been suggested is true and has been implemented, why
> bother to subscribe to this list?
>

* new vulns can be evaluated against our environment and a
mitigation/resolution can be implemented
* things change; we need to stay up to date
* someone can say "I had that problem once, here is how it is mitigated"
* we learn things we did not know about that can be implemented today



