Message-ID: <20071207001803.GA9330@cs.unibo.it>
Date: Fri, 7 Dec 2007 01:18:03 +0100
From: renzo@...unibo.it (Renzo Davoli)
To: Andi Kleen <andi@...stfloor.org>
Cc: Chris Friesen <cfriesen@...tel.com>, linux-kernel@...r.kernel.org
Subject: Re: New Address Family: Inter Process Networking (IPN)
I have done some raw tests.
(you can read the code here: http://www.cs.unibo.it/~renzo/rawperftest/)
The programs are quite simple: the sender sends "Hello World" as fast as it
can, while the receiver prints time() for every million messages
received.
On my laptop, testing with 20,000,000 "Hello World" packets:
One receiver:
multicast 244,000 msg/sec
IPN 333,000 msg/sec (36% faster)
Two receivers:
multicast 174,000 msg/sec
IPN 250,000 msg/sec (43% faster)
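For illustration, here is a minimal Python sketch of the multicast side of
the test (the real benchmarks are the C programs at the URL above; the
group address, port, and message count here are arbitrary placeholders,
and send/receive are interleaved so the sketch cannot overrun the socket
buffer):

```python
import socket
import struct
import time

GROUP = "239.255.0.1"   # hypothetical group address, not the one from the real tests
PORT = 50007            # arbitrary port
N = 1000                # message count (the real tests used 20,000,000)

def make_receiver():
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind(("", PORT))
    # join the multicast group on the loopback interface
    mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("127.0.0.1"))
    s.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    s.settimeout(2.0)
    return s

def make_sender():
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # route multicast via loopback; IP_MULTICAST_LOOP is on by default,
    # so local group members see our packets
    s.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_IF, socket.inet_aton("127.0.0.1"))
    return s

def run():
    rx, tx = make_receiver(), make_sender()
    got = 0
    t0 = time.time()
    for _ in range(N):
        tx.sendto(b"Hello World", (GROUP, PORT))
        if rx.recv(64) == b"Hello World":   # interleaved to avoid rcvbuf overflow
            got += 1
    return got, got / (time.time() - t0)

if __name__ == "__main__":
    got, rate = run()
    print("%d messages, %.0f msg/sec" % (got, rate))
```

The numbers above came from the free-running C version, not from a
ping-pong loop like this one, so this sketch only shows the mechanism, not
the measurement methodology.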
Apart from this, how could I implement policies on top of a multicast
socket? E.g., how would a kernel vde_switch work over multicast sockets?
If I send an Ethernet packet over a multicast socket it can emulate just a
hub. (It also seems quite unnatural to me to have TCP/UDP over IP over
Ethernet over UDP over IP; okay, we can skip the Ethernet on localhost,
and long Ethernet frames get fragmented, but... details.)
On a multicast socket you cannot apply policies. What I mean is that an
IPN network (or bus, or group) can have a policy that reads some
information in the packet to decide the set of recipients.
For a vde_switch it is the destination MAC address, when found in the
MAC hash table, that selects the recipient port. For MIDI communication it
could be the channel number....
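To make the "policy" idea concrete, here is a hypothetical Python sketch
(none of these names come from IPN or vde_switch; a policy is modeled as a
function from an incoming frame to the set of recipient ports):

```python
def hub_policy(ports, in_port, frame, table):
    # A hub has the trivial policy: deliver to everyone except the sender.
    return {p for p in ports if p != in_port}

def switch_policy(ports, in_port, frame, table):
    # A vde_switch-like policy: learn source MACs, and when the destination
    # MAC is found in the hash table, deliver to that single port.
    dst, src = frame[0:6], frame[6:12]        # Ethernet dst/src MAC addresses
    table[src] = in_port                      # learn which port this source is on
    if dst in table and table[dst] != in_port:
        return {table[dst]}                   # known unicast: one recipient
    return {p for p in ports if p != in_port} # unknown or broadcast: flood
```

A multicast socket gives you only the hub_policy behavior; the point of an
in-kernel policy hook is that switch_policy (or a MIDI-channel filter) can
narrow the recipient set per packet.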
Moving the switching fabric to userland makes the performance figures
quite different.
renzo