Message-ID: <20090917212113.GC3691@ghostprotocols.net>
Date:	Thu, 17 Sep 2009 18:21:13 -0300
From:	Arnaldo Carvalho de Melo <acme@...hat.com>
To:	Nir Tzachar <nir.tzachar@...il.com>
Cc:	David Miller <davem@...emloft.net>,
	Linux Networking Development Mailing List 
	<netdev@...r.kernel.org>,
	Caitlin Bestler <caitlin.bestler@...il.com>,
	Chris Van Hoof <vanhoof@...hat.com>,
	Clark Williams <williams@...hat.com>,
	Neil Horman <nhorman@...driver.com>,
	Nivedita Singhvi <niv@...ibm.com>,
	Paul Moore <paul.moore@...com>,
	Rémi Denis-Courmont 
	<remi.denis-courmont@...ia.com>,
	Steven Whitehouse <steve@...gwyn.com>,
	Ziv Ayalon <zivayalon@...il.com>
Subject: Re: [RFCv4 PATCH 2/2] net: Allow protocols to provide an
	unlocked_recvmsg socket method

On Thu, Sep 17, 2009 at 05:09:19PM +0300, Nir Tzachar wrote:
> Hello.
> 
> Below are some test results with the patch (only part 1, as I did not
> manage to apply part 2).

I forgot to mention that the patches were made against DaveM's
net-next-2.6 tree at:

git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next-2.6

If you have a linux-2.6 git tree, just do:

cd linux-2.6
git remote add net-next git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next-2.6
git fetch net-next
git checkout -b net-next-recvmmsg net-next/master

And you should be able to apply the two patches cleanly.

> The test application is attached below, and works as follows:
> 
> I set out to measure the latency which can be saved by this patch, and
> the application is designed accordingly. It is composed of three
> parts: a producer, which time-stamps packets and sends them as fast as
> possible, a mirror, which receives messages and bounces them to a
> remote destination and finally, a consumer, which receives messages as
> fast as possible and measures latency and throughput.
> 
> Both the producer and consumer are executed on the same host and the
> mirror on a remote host. Both hosts are running linux 2.6.31 with v4
> of the patch (but, as I said before, only part 1, with the unlocked_*
> stuff). All processes are executed under SCHED_FIFO. Both hosts are

Here is the problem, the patch, as mentioned above, was made against
net-next-2.6.

I'll rework the 2nd patch so that you can test with both.

> connected by a switched 1G Ethernet network. The mirror is executed on
> an 8-core Nehalem beast, and the producer and consumer on my desktop,
> which is a quad. /proc/cpuinfo and lspcis and .configs can be supplied
> if needed. Network cards are Intel Corporation 82566DM-2 Gigabit
> Network and Broadcom Corporation NetXtreme II BCM5709 Gigabit
> Ethernet.
> 
> The results (which follow below) clearly show the advantages of using
> recvmmsg over recvmsg both latency wise and throughput wise. The
> addition of a sendmmsg would also have a huge impact, IMO.

Yeah, there are even some smarts possible in the sendmmsg case, like
avoiding passing the same payload repeatedly for multiple destinations:
marking an mmsghdr with a zero size would mean "reuse the latest
non-zero sized payload".

> Receiving batches of 30 packets, each of 1024 bytes, results in no
> latency improvement, but a ~55% throughput improvement, from 72
> megabytes per second to 111. Repeating the same test with batches
> of 3000 displays the same behaviour. The more interesting result
> (to me, at least :) is with small packets. Sending packets of size
> 100 and receiving in batches of 30 gives 470 microseconds of latency
> and 244669 packets per second. On the other hand, without recvmmsg
> we get 750 microseconds of latency and 210818 packets per second. A
> huge improvement here.
> 
> I think that with a bit more tinkering we can even stretch these results a bit.

I guess so too; with luck I'll be able to test this over a 10 Gbit/s
link today, using both my test cases and yours.

Thanks a lot!
 
- Arnaldo
--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
