From: pingouin at rhapsodyk.net (Simon Marechal)
Subject: interesting?

On Sat, Feb 01, 2003 at 05:03:40PM +0100, Simon Richter wrote:
> > Using a random distribution is the best no-brainer way to make sure
> > having 500 worms will produce a 500 times wider coverage.
>
> No, with a truly random pattern they will step on each other's toes.

Of course. But with a truly random pattern, one worm _should_ step on its
own toes at one time or another. What I meant is that having 500 worms
with a random target selector makes it behave like one worm that tries to
infect other hosts 500 times faster (see the coverage sketch after this
message).

> > PS: what you're describing looks like a pseudo-random generator ...
> > it doesn't look like a structured approach.
>
> It may very well be one, or just luck. Point is, you can optimize PRNGs
> in a specific direction, like the number of cycles contained, or you can
> add external elements like the time and make a function that's not
> bijective (which is necessary for a worm) etc. A worm is more effective
> if fewer bits depend on the time and more on the host we're on, as this
> distributes the attack better.

I don't find that obvious ... if hosts are close, time, if precise
enough, might be a much better indicator. On Intel CPUs there is a
register that is incremented on every cycle. If you run it through a good
hash function to build a seed, you might have a really good seed
generator (see the seeding sketch after this message). Even so, a good
PRNG should behave very differently even with very close seeds.

> On the other hand, if all bits depend on
> the current host, you have a PRNG with only one cycle that gets broken
> by the first host not running SQL Server.

I don't understand that ...

> You need to find a good
> balance, respecting the percentage and distribution of hosts running
> vulnerable software and of course the fact that the system clock
> proceeds very slowly and thus you can use only a few bits of it (but
> basically, these bits, together with maybe a counter, make up the
> redundancy you need to infect an entire network even if some hosts are
> not vulnerable).

And this is very obscure to me too :) What do you mean? That there is a
way to coordinate this job by using a source of entropy?
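A minimal simulation of the coverage point above, in C: many independent
scanners picking uniformly random targets behave like one scanner making
that many times more probes, and a predictable fraction of the probes
lands on already-hit addresses. The address-space size, scanner count,
and probe count below are illustrative assumptions, not figures from the
thread.

    #include <stdio.h>
    #include <stdlib.h>
    #include <stdint.h>

    #define SPACE  (1u << 20)  /* simulated address-space size (assumed) */
    #define WORMS  500         /* independent scanners (assumed) */
    #define PROBES 2000        /* probes per scanner (assumed) */

    /* Marsaglia xorshift64: small self-contained PRNG for the demo. */
    static uint64_t rng_state = 88172645463325252ull;
    static uint64_t xorshift64(void)
    {
        rng_state ^= rng_state << 13;
        rng_state ^= rng_state >> 7;
        rng_state ^= rng_state << 17;
        return rng_state;
    }

    int main(void)
    {
        unsigned char *hit = calloc(SPACE, 1);
        unsigned long probes = 0, unique = 0;

        for (int w = 0; w < WORMS; w++)
            for (int p = 0; p < PROBES; p++) {
                uint32_t t = xorshift64() % SPACE;  /* SPACE is a power
                                                       of two, so no
                                                       modulo bias */
                probes++;
                if (!hit[t]) { hit[t] = 1; unique++; }
            }

        /* Expected unique ~= SPACE * (1 - (1 - 1/SPACE)^probes): with
         * 10^6 probes into 2^20 addresses that is about 61% coverage,
         * so roughly a third of the probes hit duplicates. */
        printf("%lu probes, %lu unique targets, %.1f%% duplicated effort\n",
               probes, unique, 100.0 * (probes - unique) / probes);
        free(hit);
        return 0;
    }

So the aggregate does scan 500 times faster, but the "stepping on toes"
cost grows with coverage, exactly the birthday-style overlap both posters
are gesturing at.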
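And a sketch of the cycle-counter seeding idea, assuming GCC/Clang inline
assembly on x86: the register in question is the time-stamp counter read
by the RDTSC instruction, and splitmix64 stands in here for the
unspecified "good hash function". Two readings taken a few cycles apart
are nearly identical as raw values, but the mixed seeds differ in about
half their bits, which is what lets close seeds drive a PRNG down very
different paths.

    #include <stdio.h>
    #include <stdint.h>

    /* Read the x86 time-stamp counter (incremented every cycle). */
    static uint64_t rdtsc(void)
    {
        uint32_t lo, hi;
        __asm__ volatile ("rdtsc" : "=a"(lo), "=d"(hi));
        return ((uint64_t)hi << 32) | lo;
    }

    /* splitmix64: a well-known 64-bit finalizer, used here as the
     * "good hash function"; the original post names no specific one. */
    static uint64_t mix(uint64_t x)
    {
        x += 0x9e3779b97f4a7c15ull;
        x = (x ^ (x >> 30)) * 0xbf58476d1ce4e5b9ull;
        x = (x ^ (x >> 27)) * 0x94d049bb133111ebull;
        return x ^ (x >> 31);
    }

    int main(void)
    {
        /* Two readings a few cycles apart: raw values nearly equal,
         * mixed seeds unrelated. */
        uint64_t a = rdtsc(), b = rdtsc();
        printf("raw:   %016llx  %016llx\n",
               (unsigned long long)a, (unsigned long long)b);
        printf("mixed: %016llx  %016llx\n",
               (unsigned long long)mix(a), (unsigned long long)mix(b));
        return 0;
    }

The mixing step is doing the real work: without it, two hosts sampling
the counter at almost the same moment would seed their generators into
almost the same stream.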