Date:	Mon, 24 Jan 2011 10:18:13 +0100
From:	Eric Dumazet <eric.dumazet@...il.com>
To:	juice@...gman.org
Cc:	"Brandeburg, Jesse" <jesse.brandeburg@...el.com>,
	"Loke, Chetan" <chetan.loke@...scout.com>,
	Jon Zhou <jon.zhou@...u.com>,
	Stephen Hemminger <shemminger@...tta.com>,
	"netdev@...r.kernel.org" <netdev@...r.kernel.org>
Subject: RE: Using ethernet device as efficient small packet generator

On Monday, 24 January 2011 at 10:10 +0200, juice wrote:
> >> you may also want to try reducing the tx descriptor ring count to 128
> >> using ethtool, and change the ethtool -C rx-usecs 20 setting, try
> >> 20,30,40,50,60
> >
> > So this could make my current network card a little faster?
> > If I can reach 1.1 Mpackets/s, that's about 560 Mbit/s. At least it
> > would get me a little closer to what I am trying to achieve.
> >
> 
> I tried these tunings, and I get the best pktgen performance with
> "ethtool -G eth1 tx 128" and "ethtool -C eth1 rx-usecs 10". Any other
> values lower the TX performance.
> 

That (rx-usecs 10) makes no sense: pktgen only sends packets, so you
should not be receiving any.
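If you want to check whether it matters at all, the sweep Jesse
suggested is easy to script (a sketch; each pass assumes you rerun the
pktgen test and read the result line back from /proc/net/pktgen/eth1):

for v in 10 20 30 40 50 60; do
    ethtool -C eth1 rx-usecs $v
    # ... rerun the pktgen test here, then record the pps figure:
    grep pps /proc/net/pktgen/eth1
done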

> Now I can get these rates:
> 
> root@...abralinux:/var/home/juice/pkt_test# cat /proc/net/pktgen/eth1
> Params: count 10000000  min_pkt_size: 60  max_pkt_size: 60
>      frags: 0  delay: 0  clone_skb: 1  ifname: eth1
>      flows: 0 flowlen: 0
>      queue_map_min: 0  queue_map_max: 0
>      dst_min: 10.10.11.2  dst_max:
>         src_min:   src_max:
>      src_mac: 00:1b:21:7c:e5:b1 dst_mac: 00:04:23:08:91:dc
>      udp_src_min: 9  udp_src_max: 9  udp_dst_min: 9  udp_dst_max: 9
>      src_mac_count: 0  dst_mac_count: 0
>      Flags:
> Current:
>      pkts-sofar: 10000000  errors: 0
>      started: 1205660106us  stopped: 1218005650us idle: 804us
>      seq_num: 10000001  cur_dst_mac_offset: 0  cur_src_mac_offset: 0
>      cur_saddr: 0x0  cur_daddr: 0x20b0a0a
>      cur_udp_dst: 9  cur_udp_src: 9
>      cur_queue_map: 0
>      flows: 0
> Result: OK: 12345544(c12344739+d804) nsec, 10000000 (60byte,0frags)
>   810008pps 388Mb/sec (388803840bps) errors: 0
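(For reference, the parameter block above corresponds to a pktgen
control script along these lines; a sketch that assumes the device was
bound to the kpktgend_0 kernel thread:)

modprobe pktgen
echo "rem_device_all" > /proc/net/pktgen/kpktgend_0
echo "add_device eth1" > /proc/net/pktgen/kpktgend_0
echo "count 10000000" > /proc/net/pktgen/eth1
echo "pkt_size 60" > /proc/net/pktgen/eth1
echo "delay 0" > /proc/net/pktgen/eth1
echo "clone_skb 1" > /proc/net/pktgen/eth1
echo "dst 10.10.11.2" > /proc/net/pktgen/eth1
echo "dst_mac 00:04:23:08:91:dc" > /proc/net/pktgen/eth1
echo "start" > /proc/net/pktgen/pgctrl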
> 
> AX4000:
>   Total bitrate:             414.629 MBits/s
>   Packet rate:               809824 packets/s
>   Bandwidth:                 41.46% GE
>   Average packet interval:   1.23 us
> 
> This is a bit better than the previous maximum of 750064pps / 360Mb/sec
> that I was able to achieve without any ethtool tuning, but still not
> near the 1.1 Mpackets/s that should be doable with my card?
> 
> Are there other tunings, or an alternative driver, that I could use to
> get the best performance out of the card? What puzzles me is that I get
> much better performance with larger packets, which suggests the
> bottleneck is not the PCIe interface, since I can push enough data
> through it. Is there a way to do larger transfers on the bus, such as
> grouping many small packets together, to avoid the overhead of so many
> TX interrupts?
> 

What matters is not the size of the packets but the number of
transactions (packets per second).
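For scale, the theoretical GigE ceiling for minimum-size frames is

  10^9 bit/s / ((64 + 8 + 12) bytes * 8 bits/byte) ~= 1.488 Mpps

counting the 8-byte preamble and 12-byte inter-frame gap on the wire,
so 810 kpps is barely over half of what the medium itself allows.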

To get an order of magnitude more transactions per second, not just
10% or 15%, you need an x4 or x8 PCIe slot ;)
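(You can check what the slot actually negotiated with lspci; a sketch,
where the PCI address 01:00.0 is a placeholder for your NIC:)

# LnkCap is what the device supports, LnkSta what was negotiated
lspci -vv -s 01:00.0 | grep -i lnk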

TX interrupts are already 'grouped': one interrupt covers roughly 50
packets, as the coalescing settings show.

ethtool -c reports:

tx-usecs: 72
tx-frames: 53
tx-usecs-irq: 0
tx-frames-irq: 53
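If the driver accepts writes to those values, the same knobs can be
pushed further to batch more packets per TX interrupt (a sketch with
illustrative values, not a recommendation):

ethtool -C eth1 tx-frames 128 tx-usecs 200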


