Date:	Tue, 14 Dec 2010 17:24:15 +0100
From:	Eric Dumazet <eric.dumazet@...il.com>
To:	Jesper Dangaard Brouer <hawk@...x.dk>
Cc:	Stephen Hemminger <shemminger@...tta.com>,
	netfilter-devel <netfilter-devel@...r.kernel.org>,
	netdev <netdev@...r.kernel.org>
Subject: Re: Possible regression: Packet drops during iptables calls

On Tue, 14 Dec 2010 at 17:09 +0100, Jesper Dangaard Brouer wrote:
> On Tue, 2010-12-14 at 16:31 +0100, Eric Dumazet wrote:
> > On Tue, 14 Dec 2010 at 15:46 +0100, Jesper Dangaard Brouer wrote:
> > > I'm experiencing RX packet drops during call to iptables, on my
> > > production servers.
> > > 
> > > Further investigation showed that it's only the CPU executing the
> > > iptables command that experiences packet drops!?  Thus, a quick fix was
> > > to force the iptables command to run on one of the idle CPUs (this can
> > > be achieved with the "taskset" command).
> > > 
> > > I have a 2x Xeon 5550 CPU system, thus 16 CPUs (with HT enabled).  We
> > > only use 8 CPUs due to a multiqueue limitation of 8 queues in the
> > > 1Gbit/s NICs (82576 chips).  CPUs 0 to 7 are assigned for packet
> > > processing via smp_affinity.
> > > 
> > > Can someone explain why the packet drops only occur on the CPU
> > > executing the iptables command?
> > > 
> > 
> > It blocks BH
> > 
> > Take a look at these commits:
> > 
> > 24b36f0193467fa727b85b4c004016a8dae999b9
> > netfilter: {ip,ip6,arp}_tables: dont block bottom half more than
> > necessary 
> > 
> > 001389b9581c13fe5fc357a0f89234f85af4215d
> > netfilter: {ip,ip6,arp}_tables: avoid lockdep false positive
> > 
> > for attempts to let BH fly ...
> > 
> > Unfortunately, lockdep rules :(
> 
> Is the lockdep check a false positive?

Yes, it's a false positive.

> Could I run with 24b36f0193 in production, to fix my problem?
> 

Yes, but you could also run a kernel with both commits:

We now block BH for each cpu we are "summing", instead of blocking BH
across the whole summation over all 16 possible cpus (so BH should be
blocked for a smaller amount of time).

> I forgot to mention I run kernel 2.6.35.8-comx01+ (based on Greg's stable kernel tree).
> 
> $ git describe --contains 24b36f019346
> v2.6.36-rc1~571^2~46^2~7
> $ git describe --contains 001389b9581c1
> v2.6.36-rc3~2^2~42
> 
> 
> > > What can we do to solve this issue?
> 
> Any ideas how we can proceed?
> 
> Looking closer at the two combined code changes, I see that the code path
> has been improved (a bit), as local BH is only disabled inside the
> for_each_possible_cpu(cpu) loop.  Before, local BH was disabled for the whole
> function.  Guess I need to reproduce this in my test lab.
> 

Yes, so current kernel is a bit better.

Note that even with the 'false positive' problem, we had to block BH
for the current cpu sum, so the max BH latency is probably the same with
or without 001389b9581c13fe5.



