From: DaveHowe at cmn.sharp-uk.co.uk (Dave Howe)
Subject: idea (quite a bit off-topic, but....)

D B wrote:
> what the port hopping tries to achieve is making it
> even more difficult to sniff because one cant just
> sniff a certain port.... with a random range u have to
> suck in garbage data and this increases the time it
> takes to reassemble if it is even possible
No, it is functionally equivalent to just opening a single connection.
Assuming TCP, all you would need to do under your protocol would be to
note the initial (SSL) control connection, then capture *all* traffic
between the two nodes that just opened a "spread spectrum" connection.
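A minimal sketch of why port hopping doesn't help here: once an eavesdropper captures all traffic between the host pair, reassembly can ignore the ports entirely and just order payloads by sequence number (the packet tuples below are a hypothetical simplification, assuming sequence numbers are visible on the wire):

```python
def reassemble(packets):
    # packets: list of (port, seq, payload) tuples captured between the
    # two hosts. The port a block arrived on is irrelevant; sorting by
    # sequence number recovers the original stream.
    return b"".join(payload for _, _, payload in
                    sorted(packets, key=lambda t: t[1]))
```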
Possibly more interesting would be a spoofed-source variant: you connect
and assert a file size and name, plus a random "address"; you obtain a
unique id and a symmetric key.
Then break your unencrypted file up into 'n' blocks, number them with a
prefix, and postfix the unique id.
At random intervals, pick a random block from your pool, generate a random
new address, and prefix the block with it. Now encrypt the whole thing
with the symmetric key you got from the server, and send it over UDP to a
fixed port on the server (or even ICMP) - but *from* a spoofed IP address
matching the one you asserted to the server earlier.
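The packet layout described above could be sketched roughly like this; the helper names are hypothetical, and a toy XOR keystream stands in for whatever real symmetric cipher the server would actually hand out:

```python
import hashlib
import struct

def keystream(key: bytes, length: int) -> bytes:
    # Toy SHA-256 counter-mode keystream: a stand-in only, not a real cipher.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + struct.pack(">Q", counter)).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, data: bytes) -> bytes:
    # XOR stream cipher, so decrypt is the same operation as encrypt.
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

def make_packet(block_num: int, block: bytes, unique_id: bytes,
                fake_addr: bytes, key: bytes) -> bytes:
    # Prefix: block number; postfix: unique id; then prepend the random
    # "address" and encrypt it all with the server-issued symmetric key.
    payload = struct.pack(">I", block_num) + block + unique_id
    return encrypt(key, fake_addr + payload)
```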
More than one person can have the same expected source IP; the server
will decrypt the block with each symmetric key it has for that source IP
in turn, and if none yields the matching unique id, discard the packet.
(Collisions would be fairly rare anyhow, even in a DoS attack; there are,
after all, 2^32 possible IP addresses.)
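The server-side demultiplexing could look something like the sketch below. Again the names are hypothetical and a toy XOR cipher stands in for the real one; the point is the trial-decryption loop keyed on the asserted source IP:

```python
import hashlib
import struct

def toy_cipher(key: bytes, data: bytes) -> bytes:
    # XOR with a SHA-256 counter-mode keystream (stand-in for a real
    # cipher; XOR makes encrypt and decrypt the same operation).
    stream = b""
    ctr = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + struct.pack(">Q", ctr)).digest()
        ctr += 1
    return bytes(a ^ b for a, b in zip(data, stream))

def demux(packet: bytes, keys_for_ip: dict):
    # keys_for_ip maps unique id -> symmetric key for every upload that
    # asserted this source IP. Try each key in turn: a decryption ending
    # with the matching unique id identifies the sender; if none match,
    # the packet is discarded (return None).
    for uid, key in keys_for_ip.items():
        plain = toy_cipher(key, packet)
        if plain.endswith(uid):
            return uid, plain
    return None
```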
After a random number of packets, reconnect with SSL, assert the unique id
*and* filename, and the reply will be the last block the server received
and successfully processed (if a packet goes missing, that lets you
resend it or them).
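The server's reply on that reconnect amounts to "highest block such that everything up to it has arrived", which is what tells the sender where the gap starts. A minimal sketch (helper name hypothetical):

```python
def last_contiguous_block(received: set) -> int:
    # Highest block number n such that blocks 0..n have all arrived,
    # or -1 if block 0 is still missing. Anything after n may need to
    # be resent.
    n = -1
    while n + 1 in received:
        n += 1
    return n
```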
Note that the SSL connections there would be backtraceable, even if the
data isn't - and sending a big flood of UDP/ICMP packets (even via relays)
interwoven with direct SSL connections would look pretty suspicious to
anyone monitoring *your* connection. Of course you could generate cover
traffic (send vaguely random blocks to a file asserted as "/dev/null"),
but it would be nicer if the SSL link were the one with a high degree of
cover traffic.
Consider, instead of a dedicated server, running the SSL channel as a
simple PHP or Perl script on a functioning webserver - then host a free
https webmail service; the control-channel traffic would be lost in the
noise of several thousand web freeloaders all checking their webmail
accounts, and of course uploaders would have valid email accounts there
too (checking them would be their cover traffic).
Add a better reason for UDP packets (like some sort of IM client) and you
have a functional "service to the community" server, whose covert traffic
is hard to distinguish from (and lost in) legitimate traffic to the same
ports.

