Message-ID: <CAD6jFUTL9-Uyf-0YPwB7Fd5V71NvJh4ameZMF2GCQjDXh_DVzw@mail.gmail.com>
Date:	Thu, 25 Oct 2012 20:04:32 +0200
From:	Daniel Borkmann <danborkmann@...earbox.net>
To:	Ajith Adapa <adapa.ajith@...il.com>
Cc:	netdev@...r.kernel.org
Subject: Re: Regarding bottlenecks for high speed packet generation

On Thu, Oct 25, 2012 at 7:46 PM, Ajith Adapa <adapa.ajith@...il.com> wrote:
> I am trying out a sample application, based on packet_mmap, to
> generate packets at line rate. I am using a 3.4.10 kernel and a
> gigabit NIC.
> I have run into some strange issues, described below.
>
> When I am transmitting 1500-byte packets, the socket buffers fill
> up easily even if I increase the values of wmem_default and
> wmem_max.
> The memory I have allotted can fit around 14k packets of 1500
> bytes. Is the NIC not able to transmit them? How can I check that?
> I have even increased the NIC's hardware TX queue size from 256
> to 4096 using ethtool.
>
> Could traffic control be causing an issue? I have read that it won't allow traffic bursts.

Not in my experience. On Gigabit Ethernet (depending on your
machine) you should be able to generate about 80k pps with 1500-byte
packets (e.g. with netsniff-ng's trafgen) under the default tc
queueing discipline.
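As a back-of-envelope sanity check of that ~80k figure, a 1500-byte payload costs more than 1500 bytes on the wire once Ethernet header, FCS, preamble, and inter-frame gap are counted (the per-frame overhead values below are standard Ethernet constants, not from this thread):

```shell
#!/bin/sh
# Theoretical line-rate pps for 1500-byte payloads on 1 Gbit/s Ethernet.
# On-wire cost per frame: 1500 payload + 14 header + 4 FCS
#                         + 8 preamble/SFD + 12 inter-frame gap bytes.
wire_bytes=$((1500 + 14 + 4 + 8 + 12))
pps=$((1000000000 / (wire_bytes * 8)))
echo "$pps"   # integer division of 10^9 bits by bits-per-frame
```

This yields roughly 81k pps, which is consistent with the ~80k number above.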

> Is there any way to find the major bottlenecks that would really cause
> problems in high-speed packet generation.

Have you looked at how many pps the in-kernel pktgen can generate on
your machine with 1500-byte packets?
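The in-kernel pktgen is driven through procfs; a minimal sketch of such a run follows (the interface name, destination address/MAC, and packet count are assumptions for illustration, and root is required):

```shell
#!/bin/sh
# Sketch: drive the in-kernel pktgen via /proc/net/pktgen.
# eth0, the destination IP/MAC, and the count are placeholders.
modprobe pktgen

# Bind the device to the kernel thread for CPU 0.
echo "add_device eth0" > /proc/net/pktgen/kpktgend_0

# Configure the flow on the device.
echo "pkt_size 1500"            > /proc/net/pktgen/eth0
echo "count 1000000"            > /proc/net/pktgen/eth0
echo "dst 192.168.1.2"          > /proc/net/pktgen/eth0
echo "dst_mac 00:11:22:33:44:55" > /proc/net/pktgen/eth0

# Start all threads; blocks until the count is sent.
echo "start" > /proc/net/pktgen/pgctrl

# The "Result:" line of the device file reports the achieved pps.
cat /proc/net/pktgen/eth0
```

This gives an upper bound from inside the kernel, against which a userspace packet_mmap sender can be compared.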

What tool do you use to measure the reported rate? Do you read
driver statistics from e.g. procfs, or do you use a libpcap-based
tool to measure it (which you shouldn't, since capturing can skew
your stats)? What was your measurement setup?
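For the driver-statistics approach, one option is to sample the kernel's per-interface TX counter from sysfs twice and take the delta; a minimal sketch (the interface defaults to `lo` here only so the snippet runs anywhere; point DEV at the real NIC):

```shell
#!/bin/sh
# Sketch: derive pps from two samples of a NIC's tx_packets counter.
DEV=${DEV:-lo}

# pps_of <count0> <count1> <secs>: packets-per-second over an interval.
pps_of() {
    echo $(( ($2 - $1) / $3 ))
}

t0=$(cat /sys/class/net/"$DEV"/statistics/tx_packets)
sleep 1
t1=$(cat /sys/class/net/"$DEV"/statistics/tx_packets)
echo "$DEV: $(pps_of "$t0" "$t1" 1) pps"
```

Reading the counters this way avoids putting a capture path (and its packet copies) between the NIC and the measurement.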
--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
