Message-ID: <1291109699.2904.11.camel@edumazet-laptop>
Date: Tue, 30 Nov 2010 10:34:59 +0100
From: Eric Dumazet <eric.dumazet@...il.com>
To: Rui <wirelesser@...il.com>
Cc: netdev@...r.kernel.org
Subject: Re: multi bpf filter will impact performance?
On Tuesday, 30 November 2010 at 17:22 +0800, Rui wrote:
> hi
>
> I ran a test with an Intel X520 10G NIC in an HP DL380 G6 to see how
> bpf filters impact performance.
>
> kernel 2.6.32, SLES11 SP1, original ixgbe driver
>
Could you try latest net-next-2.6? We optimized bpf a bit lately:

commit 93aaae2e01e57483256b7da05c9a7ebd65ad4686
Author: Eric Dumazet <eric.dumazet@...il.com>
Date:   Fri Nov 19 09:49:59 2010 -0800

    filter: optimize sk_run_filter
> step 0:
> launch 4 tcpdump processes, each applying a filter that matches a
> subset of the GTP-U packets. Seen with 'tcpdump -d', the bpf code is
> about 100 lines.
>
> #!/bin/sh
> PCAP_FRAMES=32000 ./tcpdump_MMAP -i eth4 'udp dst port 2152 and (
> (((ether[48:1]&0x07)>0) and
> (((ether[66:1]+ether[67:1]+ether[68:1]+ether[69:1]+ether[70:1]+ether[71:1]+ether[72:1]+ether[73:1])&0x03)==0))
> or (((ether[48:1]&0x07)==0) and
> (((ether[62:1]+ether[63:1]+ether[64:1]+ether[65:1]+ether[66:1]+ether[67:1]+ether[68:1]+ether[69:1])&0x03)==0))
> ) ' -w /dev/null -s 4096 2>f1.log &
> PCAP_FRAMES=32000 ./tcpdump_MMAP -i eth4 'udp dst port 2152 and (
> (((ether[48:1]&0x07)>0) and
> (((ether[66:1]+ether[67:1]+ether[68:1]+ether[69:1]+ether[70:1]+ether[71:1]+ether[72:1]+ether[73:1])&0x03)==1))
> or (((ether[48:1]&0x07)==0) and
> (((ether[62:1]+ether[63:1]+ether[64:1]+ether[65:1]+ether[66:1]+ether[67:1]+ether[68:1]+ether[69:1])&0x03)==1))
> ) ' -w /dev/null -s 4096 2>f2.log &
> PCAP_FRAMES=32000 ./tcpdump_MMAP -i eth4 'udp dst port 2152 and (
> (((ether[48:1]&0x07)>0) and
> (((ether[66:1]+ether[67:1]+ether[68:1]+ether[69:1]+ether[70:1]+ether[71:1]+ether[72:1]+ether[73:1])&0x03)==2))
> or (((ether[48:1]&0x07)==0) and
> (((ether[62:1]+ether[63:1]+ether[64:1]+ether[65:1]+ether[66:1]+ether[67:1]+ether[68:1]+ether[69:1])&0x03)==2))
> ) ' -w /dev/null -s 4096 2>f3.log &
> PCAP_FRAMES=32000 ./tcpdump_MMAP -i eth4 'udp dst port 2152 and (
> (((ether[48:1]&0x07)>0) and
> (((ether[66:1]+ether[67:1]+ether[68:1]+ether[69:1]+ether[70:1]+ether[71:1]+ether[72:1]+ether[73:1])&0x03)==3))
> or (((ether[48:1]&0x07)==0) and
> (((ether[62:1]+ether[63:1]+ether[64:1]+ether[65:1]+ether[66:1]+ether[67:1]+ether[68:1]+ether[69:1])&0x03)==3))
> ) ' -w /dev/null -s 4096 2>f4.log &
>
>
Hmm, do you receive traffic on several queues of your card?
Do you have 4 CPUs running?

grep eth /proc/interrupts
> step 1:
> use stress-test equipment to generate traffic above 1.2 Gbps
>
>
> step 2:
> type 'ifconfig eth4'
> many dropped packets are shown
>
> step 3:
> type 'sar -n DEV 1'; the incoming throughput is limited to 800 Mbps
>
>
> my questions:
>
> Q1. Are the bpf filters run one by one? Is only one filter executed
> per socket? (So that the kernel part corresponding to each tcpdump
> runs its filter in parallel?)
>
The bpf filters are run as part of softirq processing: for each received
packet, every listening socket's filter is evaluated there, one after the
other, not in the tcpdump processes.
> Q2. Performance is bad? Any ideas to improve it?
>
Multiqueue card: steer each queue's IRQ to a separate CPU.
If the card is not multiqueue: use RPS on a recent kernel to split the
load across several CPUs. (A rough sketch of both follows.)
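
Something like this, only a sketch: it assumes 'grep eth4 /proc/interrupts'
shows the four RX queue vectors on IRQs 70-73 (your numbers will differ);
it pins each IRQ to its own CPU and, for the non-multiqueue case, writes an
RPS cpu mask for rx-0, a sysfs file that exists on 2.6.35 and later:

/* Sketch only: IRQ numbers are assumptions, check /proc/interrupts first. */
#include <stdio.h>

static void write_mask(const char *path, const char *hexmask)
{
	FILE *f = fopen(path, "w");

	if (!f) {
		perror(path);	/* needs root */
		return;
	}
	fprintf(f, "%s\n", hexmask);
	fclose(f);
}

int main(void)
{
	int irqs[] = { 70, 71, 72, 73 };	/* assumed eth4-rx-0..3 IRQs */
	char path[64], mask[16];
	int i;

	/* Multiqueue: one RX queue IRQ per CPU (CPU0..CPU3). */
	for (i = 0; i < 4; i++) {
		snprintf(path, sizeof(path), "/proc/irq/%d/smp_affinity", irqs[i]);
		snprintf(mask, sizeof(mask), "%x", 1 << i);
		write_mask(path, mask);
	}

	/* Non-multiqueue fallback: let RPS fan rx-0 out to CPUs 0-3. */
	write_mask("/sys/class/net/eth4/queues/rx-0/rps_cpus", "f");
	return 0;
}

(You can of course just echo the same masks from a shell; the point is one
RX queue per CPU, so the filtering work is spread over four CPUs instead
of one.)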
Also use a filter on the queue number itself, instead of computing a hash
by hand in the bpf (support added in commit d19742fb, linux-2.6.33; you
need to tweak libpcap to emit the SKF_AD_QUEUE instruction). A rough
example follows below the commit.
commit d19742fb1c68e6db83b76e06dea5a374c99e104f
Author: Eric Dumazet <eric.dumazet@...il.com>
Date:   Tue Oct 20 01:06:22 2009 -0700

    filter: Add SKF_AD_QUEUE instruction

    It can help being able to filter packets on their queue_mapping.

    If filter performance is not good, we could add a "numqueue" field
    in struct packet_type, so that netif_nit_deliver() and other functions
    can directly ignore packets with an unexpected queue number.

    Let's experiment with this simple filter extension first.
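
To show what the SKF_AD_QUEUE path looks like without touching libpcap,
here is a minimal sketch (mine, not part of the commit): it hand-builds a
classic BPF program whose first instruction is the ancillary load of
skb->queue_mapping, and attaches it with SO_ATTACH_FILTER so the socket
only sees packets that arrived on one RX queue. The queue number 0 and
the bare AF_PACKET socket are assumptions; a real capture tool would
prepend its protocol checks and bind to eth4.

#include <stdio.h>
#include <arpa/inet.h>
#include <sys/socket.h>
#include <linux/if_ether.h>
#include <linux/filter.h>

int main(void)
{
	enum { WANTED_QUEUE = 0 };	/* assumed: capture only RX queue 0 */

	struct sock_filter insns[] = {
		/* A = skb->queue_mapping (ancillary data load, 2.6.33+) */
		BPF_STMT(BPF_LD | BPF_W | BPF_ABS, SKF_AD_OFF + SKF_AD_QUEUE),
		/* if (A == WANTED_QUEUE) accept, else drop */
		BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, WANTED_QUEUE, 0, 1),
		BPF_STMT(BPF_RET | BPF_K, 0xffffffff),	/* accept whole packet */
		BPF_STMT(BPF_RET | BPF_K, 0),		/* drop */
	};
	struct sock_fprog prog = {
		.len = sizeof(insns) / sizeof(insns[0]),
		.filter = insns,
	};

	int fd = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
	if (fd < 0) {
		perror("socket");	/* needs CAP_NET_RAW */
		return 1;
	}
	if (setsockopt(fd, SOL_SOCKET, SO_ATTACH_FILTER, &prog, sizeof(prog)) < 0) {
		perror("SO_ATTACH_FILTER");
		return 1;
	}
	printf("only packets from RX queue %d reach this socket\n", WANTED_QUEUE);
	/* ... mmap ring / recv() loop would follow here ... */
	return 0;
}

With one such socket per RX queue (0..3), the kernel's queue_mapping does
the 4-way split instead of the byte-sum hash hand-coded in the original
filters.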