Message-ID: <Pine.BSF.4.21.0301312217010.40380-100000@vapour.net>
From: batsy at vapour.net (batz)
Subject: interesting?


According to the analysis posted to NANOG by a number of
researchers (http://www.caida.org/analysis/security/sapphire/),
the Sapphire worm infected the majority of vulnerable hosts within
the first 10 minutes.

From the introduction:

"The Sapphire Worm was the fastest computer worm in history. 
As it began spreading throughout the Internet, it doubled in 
size every 8.5 seconds. It infected more than 90 percent of 
vulnerable hosts within 10 minutes. " 

A few paragraphs down, the paper describes how both Code Red and
Sapphire used a strategy based on "random scanning", a consequence
of which is that they spread at an exponential rate. The authors
call this the Random Constant Spread (RCS) model, which is
apparently also a "classic logistic form".

So let me get this straight. You release a piece of _randomly_
self-propagating code into a networked system, and despite its
randomness (or limited randomness in the case of Sapphire) it still
manages to cover 90% of its vulnerable targets in its first 10
minutes of existence.
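
You can watch this happen with a few lines of Python. This is only a
toy sketch: the address space, vulnerable population, and probe rate
below are numbers I made up for illustration, not Sapphire's.

import random

ADDRESS_SPACE   = 1_000_000   # scaled-down stand-in for the IPv4 space
VULNERABLE      = 10_000      # hosts running the vulnerable service
PROBES_PER_TICK = 100         # probes each infected host sends per tick

vulnerable = set(random.sample(range(ADDRESS_SPACE), VULNERABLE))
infected   = {next(iter(vulnerable))}        # patient zero

tick = 0
while len(infected) < 0.9 * VULNERABLE:
    tick += 1
    # every infected host picks its targets uniformly at random
    probes = (random.randrange(ADDRESS_SPACE)
              for _ in range(len(infected) * PROBES_PER_TICK))
    infected |= {p for p in probes if p in vulnerable}
    print(f"tick {tick:3d}: {len(infected):6d} infected "
          f"({100 * len(infected) / VULNERABLE:5.1f}% of vulnerable)")

Even though every single probe is blind, the infected population
grows exponentially and saturates within a handful of ticks; the
printed curve is the logistic shape the paper describes.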

I can't be the only one who saw this and wondered whether it is a
feature of networks, as logical entities, that lets something which
picks its targets at random cover so much ground so quickly.

This seems important because it shows that a high rate of
saturation among network nodes can be achieved at least as
effectively by random distribution as by a structured or
hierarchical distribution strategy.

An example of a structured strategy would be choosing aggregation
points and going ISP by ISP, subnet by subnet, or contiguously
host by host.

I think this is significant as it could offer some 
insight into whether it is more efficient or economical (fewer
iterations?) to distribute mobile or replicating information 
into a network in a controlled vs. a random way. To me, it's 
eerily similar to the question of how to distribute 
vulnerability information most effectively in a system 
of interconnected administrators. 
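
To make the "fewer iterations" question concrete, here is a toy
comparison under the same made-up parameters as above: random,
self-replicating scanning against a single coordinated scanner
sweeping the address space contiguously.

import random

N, V, PROBES = 1_000_000, 10_000, 100    # made-up toy parameters
vulnerable = set(random.sample(range(N), V))
target     = int(0.9 * V)                # stop at 90% coverage

# (a) random and self-replicating: every reached host probes at random
infected, ticks_random = {next(iter(vulnerable))}, 0
while len(infected) < target:
    ticks_random += 1
    hits = {random.randrange(N) for _ in range(len(infected) * PROBES)}
    infected |= hits & vulnerable

# (b) structured: one scanner walks the address space block by block
found, cursor, ticks_sweep = set(), 0, 0
while len(found) < target:
    ticks_sweep += 1
    found  |= set(range(cursor, cursor + PROBES)) & vulnerable
    cursor += PROBES

print(f"random self-replicating scanning: {ticks_random:5d} ticks")
print(f"single contiguous sweep:          {ticks_sweep:5d} ticks")

It isn't a fair fight on probe economy: the sweep touches each
address exactly once and wastes nothing, while random scanning
re-probes the same addresses constantly. But the sweep can only be
parallelised by handing every participant a distinct slice of the
space to cover, which is exactly the coordination that random,
self-replicating distribution gets to skip. That is why it finishes
in so few iterations.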

Randomly seems to have worked quite well this time around. 

Cheers, 


-- 
batz

