Date:	Sat, 9 Apr 2011 17:25:48 -0500
From:	Robert Hailey <lkml@...dok.com>
To:	Valdis.Kletnieks@...edu
Cc:	linux-kernel@...r.kernel.org
Subject: Re: A long overdue fork-bomb defense ?


On 2011/04/09 (Apr), at 3:12 PM, Valdis.Kletnieks@...edu wrote:

> On Fri, 08 Apr 2011 15:47:13 CDT, Robert Hailey said:
>
>> 		log("fork_count generation");
>> 		divide_all_process_fork_counts_by_two();
>
> This will involve painful locking on large systems with lots of  
> procs running.

This logic (and the related painful locks) would be triggered only  
rarely: once per counter overflow of the single greatest forker. But  
it is a solid observation that if such a patch were in place its use  
would carry overhead; I imagine it would take a considerable amount  
of time for a long-running system to wrap its fork counts.

Is there a better way to handle the integer overflows?
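
One alternative that comes to mind (purely a sketch; saturating_inc()  
is a made-up helper, not anything in the kernel) is to saturate the  
counter instead of wrapping it. That removes the global halving pass  
and its locking entirely, at the cost that old history never decays:

	#include <limits.h>

	/* Pin the count at UINT_MAX rather than letting it wrap; no
	 * "divide all counts by two" generation step is ever needed. */
	static inline unsigned int saturating_inc(unsigned int count)
	{
		return (count == UINT_MAX) ? count : count + 1;
	}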

>
>
>> 		for ( p : process_table) {
>
> Ditto.

Thankfully, this logic would be triggered only when the process table  
is full. At that point I doubt anyone would miss the compute time of  
even the most painful lock :)
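
For concreteness, the "table is full" path I have in mind is roughly  
the following user-space sketch (struct task, fork_count, and  
pick_worst_forker() are illustrative stand-ins, not the real kernel  
structures):

	struct task {
		int pid;
		unsigned long fork_count;
		struct task *next;
	};

	/* One linear pass over the whole table: find the task with the
	 * highest accumulated fork count, i.e. the candidate to kill. */
	static struct task *pick_worst_forker(struct task *head)
	{
		struct task *worst = head;
		struct task *p;

		for (p = head; p; p = p->next)
			if (p->fork_count > worst->fork_count)
				worst = p;

		return worst;
	}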

>
>> 	if (fork_alert_level) {
>> 		if (fork_count >= fork_alert_level) {
>> 			signal(KILL, proc) && log('killed ...');
>> 			//don't: fork_alert_time=now();
>> 			return/dispatch?;
>> 		}
>> 		if (now()-fork_alert_time>10 seconds?) {
>> 			fork_alert_level=0; //Relax
>> 		}
>> 	}
>
> A smart attacker can probably use this to game the fork rate to fly
> just under the wire, while still piling up lots of processes, *and*
> adding extra overhead as it goes. If the rate limit is 5000 forks
> every 10 seconds, it can do 4500 every 10 seconds, and in a few
> minutes the poor scaling sections will eat your system alive.

Perhaps there is a misunderstanding... Although this logic is  
*sensitive* to the forking rate, it does not directly act on (or  
measure) a forking rate. It simply provides a metric by which  
processes can be compared (the number of forks in self and ancestors)  
and something to do if we find we are out of process-table space (the  
limited resource in question). Of course, if the memory ceiling is  
reached first (fork/malloc), then that is a concern of the OOM-killer  
(a separate but related discussion).
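
To make the metric concrete, here is a rough sketch (field and  
function names are illustrative only): every successful fork bumps a  
counter in the forking task and in each of its ancestors, so a shell  
that sets off fork bombs ranks at least as high as any bomb it  
spawned:

	struct task {
		unsigned long fork_count;
		struct task *parent;	/* NULL for init */
	};

	/* Called once per successful fork: charge the fork to the
	 * forker and to every ancestor up the chain. */
	static void account_fork(struct task *t)
	{
		for (; t; t = t->parent)
			t->fork_count++;
	}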

Presuming for a moment that it works, I think the worst case is  
actually a single (perhaps compromised) process spawning child fork  
bombs. For that matter, it could be a bash shell with the user  
setting them off. In that case it might *never* cause enough forking  
to get itself automatically killed, but the system would still be  
[somewhat?] responsive through the attack because it no longer denies  
a legitimate fork, i.e. logging in and using a shell still work, even  
while the process table is *FULL* of active fork bombs.

Even if a fork bomb is downgraded from "fatal" to "makes things darn  
slow", it's worth considering, no?

--
Robert Hailey

