Message-ID: <1275340896.2478.26.camel@edumazet-laptop>
Date:	Mon, 31 May 2010 23:21:36 +0200
From:	Eric Dumazet <eric.dumazet@...il.com>
To:	hawk@...x.dk
Cc:	Jesper Dangaard Brouer <hawk@...u.dk>, paulmck@...ux.vnet.ibm.com,
	Patrick McHardy <kaber@...sh.net>,
	Changli Gao <xiaosuo@...il.com>,
	Linux Kernel Network Hackers <netdev@...r.kernel.org>,
	Netfilter Developers <netfilter-devel@...r.kernel.org>
Subject: Re: DDoS attack causing bad effect on conntrack searches

On Monday, 26 April 2010 at 16:36 +0200, Jesper Dangaard Brouer wrote:
> On Sat, 2010-04-24 at 22:11 +0200, Eric Dumazet wrote:
> >  
> > > Monday or Tuesday I'll do a test setup with some old HP380 G4 machines to
> > > see if I can reproduce the DDoS attack scenario.  And see if I can get
> > > it into the lookup loop.
> > 
> > Theoretically a loop is very unlikely, given that a single retry is very
> > unlikely too.
> > 
> > Unless a cpu gets in its cache a corrupted value of a 'next' pointer.
> > 
> ...
> >
> > With the same hash bucket size (300,032) and max conntracks (800,000), and
> > after more than 10 hours of test, not a single lookup was restarted
> > because its terminating nulls value was wrong.
> 
> So far, I have to agree with you.  I have now tested on the same type
> of hardware (although running a 64-bit kernel, off net-next-2.6),
> and the result is the same as yours: I don't see any restarts of the
> loop.  The test system differs a bit, as it has two physical CPUs and 2M
> cache (and annoyingly the system insists on using HPET as its clocksource).
> 
> I guess the only explanation would be a bad/sub-optimal hash distribution.
> With 40 kpps and 700,000 'searches' per second, the hash bucket list
> length "only" needs to be 17.5 elements on average, where the optimum is 3.
> With my pktgen test, where I tried to reproduce the DoS attack, I only
> see a skew of 6 elements on average (see the arithmetic sketch below,
> after the quoted text).
> 
> 
> > I can setup a test on a 16 cpu machine, multiqueue card too.
> 
> I don't think that is necessary.  My theory was that it was possible on a
> slower single-queue NIC, where one CPU is 100% busy in the conntrack search
> and the other CPUs delete the entries (due to early drop and
> call_rcu()).  But I guess that's not the case, and RCU works perfectly ;-)
> 
> > Hmm, I forgot to say I am using net-next-2.6, not your kernel version...
> 
> I also did this test using net-next-2.6, perhaps I should try the
> version I use in production...
> 
> 

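For reference, the back-of-the-envelope arithmetic quoted above works out as
follows; a trivial standalone sketch (the figures come from the thread, the
program itself is mine):

#include <stdio.h>

int main(void)
{
	/* figures quoted above: ~700,000 conntrack 'searched' increments
	 * per second while receiving ~40,000 packets per second */
	double searches_per_sec = 700000.0;
	double packets_per_sec  = 40000.0;

	/* each packet triggers one lookup, so this is the average number
	 * of list elements walked per lookup */
	printf("average chain walk: %.1f elements\n",
	       searches_per_sec / packets_per_sec);	/* prints 17.5 */
	return 0;
}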

I had a look at current conntrack and found that the 'unconfirmed' list is
a candidate for a potential blackhole.

That is, if a reader happens to hit an entry that has just been moved from
its regular hash table slot 'hash' to the unconfirmed list, the reader
might scan the whole unconfirmed list before finding out it is no longer
on the wanted hash chain.

The problem is that this unconfirmed list might be very long in case of a
DDoS.  It is really not designed to be scanned during a lookup.

So I guess we should stop early if we find an unconfirmed entry?


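Before the patch itself, here is a simplified, userspace-style mock (my own
sketch, not the kernel code) of the lockless hlist_nulls lookup pattern that
__nf_conntrack_find() uses, illustrating why the unconfirmed list becomes a
blackhole and what the early restart changes:

/*
 * With SLAB_DESTROY_BY_RCU, an entry can be freed and reused on another
 * chain while a reader is walking it; the reader only notices at the end
 * of the walk, when the terminating "nulls" value does not match the
 * bucket it started from, and then restarts.  If the entry was moved to
 * the (potentially huge) unconfirmed list, that means walking the whole
 * list first.  The patch below bails out earlier: restart once the walk
 * is more than 8 entries deep and sits on an unconfirmed entry.
 */
struct mock_entry {
	struct mock_entry *next;	/* real entry, or an encoded nulls marker */
	int key;
	int confirmed;
};

/* encode/decode a "nulls" chain terminator carrying the bucket number */
#define MOCK_NULLS(bucket)	((struct mock_entry *)((((unsigned long)(bucket)) << 1) | 1UL))
#define MOCK_IS_NULLS(p)	((unsigned long)(p) & 1UL)
#define MOCK_NULLS_VAL(p)	((unsigned long)(p) >> 1)

struct mock_entry *mock_lookup(struct mock_entry **table,
			       unsigned int bucket, int key)
{
	struct mock_entry *e;
	unsigned int cnt;

begin:
	cnt = 0;
	for (e = table[bucket]; !MOCK_IS_NULLS(e); e = e->next) {
		if (e->key == key)
			return e;		/* found */
		/* early restart: deep in the chain and on an unconfirmed entry,
		 * so we are probably walking the unconfirmed list by accident */
		if (++cnt > 8 && !e->confirmed)
			goto begin;
	}
	/* the walk ended on a nulls marker of another bucket: the chain was
	 * modified under us, restart from the intended bucket */
	if (MOCK_NULLS_VAL(e) != bucket)
		goto begin;
	return NULL;			/* genuine miss */
}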

diff --git a/include/net/netfilter/nf_conntrack.h b/include/net/netfilter/nf_conntrack.h
index bde095f..0573641 100644
--- a/include/net/netfilter/nf_conntrack.h
+++ b/include/net/netfilter/nf_conntrack.h
@@ -298,8 +298,10 @@ extern int nf_conntrack_set_hashsize(const char *val, struct kernel_param *kp);
 extern unsigned int nf_conntrack_htable_size;
 extern unsigned int nf_conntrack_max;
 
-#define NF_CT_STAT_INC(net, count)	\
+#define NF_CT_STAT_INC(net, count)		\
 	__this_cpu_inc((net)->ct.stat->count)
+#define NF_CT_STAT_ADD(net, count, value)	\
+	__this_cpu_add((net)->ct.stat->count, value)
 #define NF_CT_STAT_INC_ATOMIC(net, count)		\
 do {							\
 	local_bh_disable();				\
diff --git a/net/netfilter/nf_conntrack_core.c b/net/netfilter/nf_conntrack_core.c
index eeeb8bc..e96d999 100644
--- a/net/netfilter/nf_conntrack_core.c
+++ b/net/netfilter/nf_conntrack_core.c
@@ -299,6 +299,7 @@ __nf_conntrack_find(struct net *net, u16 zone,
 	struct nf_conntrack_tuple_hash *h;
 	struct hlist_nulls_node *n;
 	unsigned int hash = hash_conntrack(net, zone, tuple);
+	unsigned int cnt = 0;
 
 	/* Disable BHs the entire time since we normally need to disable them
 	 * at least once for the stats anyway.
@@ -309,10 +310,19 @@ begin:
 		if (nf_ct_tuple_equal(tuple, &h->tuple) &&
 		    nf_ct_zone(nf_ct_tuplehash_to_ctrack(h)) == zone) {
 			NF_CT_STAT_INC(net, found);
+			NF_CT_STAT_ADD(net, searched, cnt);
 			local_bh_enable();
 			return h;
 		}
-		NF_CT_STAT_INC(net, searched);
+		/*
+		 * If we find an unconfirmed entry, restart the lookup to
+		 * avoid scanning the whole unconfirmed list
+		 */
+		if (unlikely(++cnt > 8 &&
+			     !nf_ct_is_confirmed(nf_ct_tuplehash_to_ctrack(h)))) {
+			NF_CT_STAT_INC(net, search_restart);
+			goto begin;
+		}
 	}
 	/*
 	 * if the nulls value we got at the end of this lookup is
@@ -323,6 +333,7 @@ begin:
 		NF_CT_STAT_INC(net, search_restart);
 		goto begin;
 	}
+	NF_CT_STAT_ADD(net, searched, cnt);
 	local_bh_enable();
 
 	return NULL;

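As a side note on reading these counters: the per-cpu stats that
NF_CT_STAT_INC/NF_CT_STAT_ADD update are summed over all CPUs on the read
side.  A hedged sketch of that pattern (mine, not part of the patch; field
and struct names follow struct ip_conntrack_stat, exact seq_file code
differs per kernel version):

#include <linux/percpu.h>
#include <linux/netfilter/nf_conntrack_common.h>
#include <net/net_namespace.h>

static unsigned int sum_search_restarts(struct net *net)
{
	unsigned int total = 0;
	int cpu;

	/* net->ct.stat is allocated with alloc_percpu(); walk every CPU
	 * and accumulate its private counter */
	for_each_possible_cpu(cpu) {
		const struct ip_conntrack_stat *st =
			per_cpu_ptr(net->ct.stat, cpu);

		total += st->search_restart;
	}
	return total;
}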

--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

Powered by blists - more mailing lists

Powered by Openwall GNU/*/Linux Powered by OpenVZ