Message-ID: <dc718edc04122511565c5cbcdf@mail.gmail.com>
From: kkadow at gmail.com (Kevin)
Subject: OpenSSH is a good choice?

On Fri, 24 Dec 2004 16:00:45 -0600 (CST), Ron DuFresne
<dufresne@...ternet.com> wrote:
> It might depend upon how the algorithm is implemented, say: search for
> easy-to-find vulnerable systems with the standard port open until perhaps
> 10 or 100 or some given number are found and infected, then go back
> through the non-vulnerable hosts and search those specifically for
> non-standard ports. That ensures spread of the worm and a quick
> infection rate, and then allows it to retarget 'hidden' systems. Seems
> to me this would merely be a change to the infection code, similar to
> those worms that had coded in them a date and time to attack a site.
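
As an illustration only, the two-phase strategy described above might look
like the following sketch. Every function here is a hypothetical, do-nothing
placeholder -- this is the target-selection logic only, not scanning code:

    SEED_THRESHOLD = 100   # "perhaps 10 or 100 or some given number"
    STANDARD_PORT = 22

    def is_vulnerable(host, port):
        """Hypothetical placeholder -- always says no; probes nothing."""
        return False

    def infect(host, port):
        """Hypothetical placeholder -- does nothing."""
        pass

    def phase_one(candidates):
        """Phase 1: hit easy targets listening on the standard port."""
        infected, deferred = [], []
        for host in candidates:
            if is_vulnerable(host, STANDARD_PORT):
                infect(host, STANDARD_PORT)
                infected.append(host)
                if len(infected) >= SEED_THRESHOLD:
                    break
            else:
                deferred.append(host)
        return infected, deferred

    def phase_two(deferred, all_open_ports):
        """Phase 2: revisit 'non-vulnerable' hosts on non-standard ports,
        which is the slow, noisy part the rest of this mail is about."""
        for host in deferred:
            for port in all_open_ports(host):
                if is_vulnerable(host, port):
                    infect(host, port)
                    break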

One consideration -- sysadmins who are bright enough to configure
services onto non-standard ports are likely also bright enough to patch
their systems and install IDS and HIPS, so such hosts are in general
less likely to be exploitable than default configurations.
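
For context, moving sshd to a non-standard port is a one-line change in
sshd_config (2222 below is an arbitrary example value), followed by a
restart of sshd:

    # /etc/ssh/sshd_config -- listen on a non-standard port
    Port 2222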

I'm not sure that a routine to find hidden, vulnerable services would
add much value to an automated "flash worm".   This approach makes
sense for a human attacker trying to penetrate a specific site or
class of target, but for a "flash worm", wouldn't it make more sense
to put the work towards finding more easy targets?

What does it benefit a worm or the worm's author to compromise 99% of
vulnerable systems rather than a mere 85% of the vulnerable
population?

Additionally, port scanning raises the profile of the source, both on
the network and at the target.   Whether just blasting out the exploit
code or doing banner scanning, the worm will need to do a full TCP
session to each potential target IP:Port.  This is not only slow, but
also very "noisy", causing unusual events to be logged by listening
daemons on the target system.
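
A minimal banner-grab sketch (Python; the target address is a hypothetical
example) shows why: identifying a service means completing the TCP
handshake and reading whatever the daemon sends, all of which the daemon
can see and log:

    import socket

    def grab_banner(host, port, timeout=5.0):
        """Open a full TCP connection and read the service banner.
        Both the connect and the read are visible to the target daemon."""
        with socket.create_connection((host, port), timeout=timeout) as sock:
            sock.settimeout(timeout)
            try:
                return sock.recv(256).decode("ascii", errors="replace")
            except socket.timeout:
                return ""   # some services expect the client to speak first

    # Hypothetical target; 192.0.2.0/24 is reserved for documentation:
    # print(grab_banner("192.0.2.10", 22))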

The only time I've ever been reprimanded for running (authorized) nmap
scans against non-hardened Solaris systems was when I used the '-sV'
option and freaked out a (non-security-conscious) sysadmin due to the
large volume of timeout and protocol errors logged by rpcbind and
other default TCP listeners.


> Seriously, why do folks think sshd should be open for the world to pound
> upon, no matter which port it's assigned to run on?

When you cannot know the source IP in advance, *something* must serve
as the gatekeeper for access to network services.
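
When sshd itself has to be that gatekeeper, the usual move is to tighten
what it will accept. A sketch of common sshd_config directives (all real
options; the values and usernames are only illustrative):

    # /etc/ssh/sshd_config -- tightening an Internet-facing sshd
    Protocol 2                 # refuse the legacy SSH-1 protocol
    PermitRootLogin no         # no direct root logins
    MaxAuthTries 3             # limit guesses per connection
    LoginGraceTime 30          # drop unauthenticated sessions quickly
    AllowUsers alice bob       # hypothetical usernames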


>  It provides an encrypted channel into the network.  Any channel in,
> especially encrypted channels, should be guarded and should allow only
> those that require access to get access.

Many systems have a business need to allow customers to connect in
from arbitrary source addresses -- vendor support for maintenance,
customers uploading content, etc.   There is an unavoidable
requirement to have *some* channel into the system.  It's tough
enough for web hosting providers to push customers to migrate off of
cleartext password protocols like telnet and FTP; now we need to
convince the customer to use public keys and strong authentication
tokens?
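
For what it's worth, the mechanics a customer faces are short (standard
OpenSSH commands; the hostname and key type are just examples):

    # On the customer's machine: generate a key pair
    ssh-keygen -t rsa

    # Install the public key in the server account's authorized_keys
    ssh-copy-id user@hosting.example.com

    # On the server, once key logins work, disable passwords in sshd_config:
    #   PasswordAuthentication no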

IPsec might make sense for employee inbound sessions, but for
"customers" of web hosting and the like, ssh itself is already the
primary gatekeeper -- there isn't any other (easy) check to implement
before letting an unknown source talk to the ssh listener.


Kevin Kadow
