Date:	Tue, 6 Aug 2013 13:08:06 -0500
From:	Shawn Bohrer <sbohrer@...advisors.com>
To:	Eliezer Tamir <eliezer.tamir@...ux.intel.com>
Cc:	Amir Vadai <amirv@...lanox.com>, netdev@...r.kernel.org
Subject: Re: low latency/busy poll feedback and bugs

On Tue, Aug 06, 2013 at 10:41:48AM +0300, Eliezer Tamir wrote:
> On 06/08/2013 00:22, Shawn Bohrer wrote:
> > 3) I don't know if this was intentional, an oversight, or simply a
> > missing feature, but UDP multicast currently is not supported.  In
> > order to add support I believe you would need to call
> > sk_mark_napi_id() in __udp4_lib_mcast_deliver().  Assuming there isn't
> > some intentional reason this wasn't done I'd be happy to test this and
> > send a patch.
> 
> This is still WIP, so our goal was to make it easy to extend for new
> cases and protocols.
> 
> For multicast, it is possible for incoming packets to come from more
> than one port (and therefore more than one queue).
> I'm not sure how we could handle that, but what we have today won't do
> well for that use-case.
 
It is unclear to me exactly what happens in this case.  With my simple
patch I'm assuming the socket will spin on the receive queue that
received its last packet.  What happens when a packet arrives on a
different receive queue than the one we are spinning on?  I assume it
is still delivered, but perhaps the spinning process won't see it
until the busy-poll time expires?  I'm just guessing and haven't tried
to confirm this by reading through the code.
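
For reference, the patch is essentially a one-liner along these lines
(a sketch, not the exact diff; where exactly it lands inside
__udp4_lib_mcast_deliver()'s socket walk may differ):

	/* Record the NAPI id of the queue this skb arrived on in each
	 * matching multicast socket, mirroring what the unicast path
	 * in __udp4_lib_rcv() already does.
	 */
	sk_mark_napi_id(sk, skb);

As far as I can tell sk_mark_napi_id() just stores skb->napi_id in
sk->sk_napi_id, so a socket will busy-poll whichever queue delivered
its most recent packet.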

I put together a small test case with two senders and a single
receiver, and verified visually (by watching /proc/interrupts) that
their traffic went to two different receive queues.  With busy_read
enabled the receiver still received all of the packets, so it at
least superficially works.  I did not measure the effect on latency.
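
Checking this doesn't require anything fancy; something along these
lines works (eth2 is just a placeholder for the receiving interface,
and 50 is an example busy-read poll time in microseconds):

[host1] $ sysctl -w net.core.busy_read=50
[host1] $ watch -d 'grep eth2 /proc/interrupts'

watch -d highlights the counters that change, which makes it easy to
see which RX queues the flows land on.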

> What do you use for testing?

In fio 2.1.2 [1] I added support for UDP multicast.  It's not quite as
flexible as I would like, but you can still test a number of scenarios
like the one above, or run a basic pingpong test.

Here are my fio job files for a pingpong test:

$ cat mcudp_rr_receive
[global]
ioengine=net
protocol=udp
bs=64
size=100m
# IP address of interface to receive packets
#interface=10.8.16.21
rw=read

[pingpong]
pingpong=1
port=10000
hostname=239.0.0.0

$ cat mcudp_rr_send
[global]
ioengine=net
protocol=udp
bs=64
size=100m
# IP address of interface to send packets
#interface=10.8.16.22
rw=write

[pingpong]
pingpong=1
port=10000
hostname=239.0.0.0

Just start the receiver on one host, then start the sender on a second
host:

[host1] $ fio mcudp_rr_receive

[host2] $ fio mcudp_rr_send

[1] http://brick.kernel.dk/snaps/fio-2.1.2.tar.bz2

--
Shawn
