Message-ID: <20140409145129.GA4002@sbohrermbp13-local.rgmadvisors.com>
Date:	Wed, 9 Apr 2014 09:51:29 -0500
From:	Shawn Bohrer <shawn.bohrer@...il.com>
To:	Edward Cree <ecree@...arflare.com>
Cc:	netdev@...r.kernel.org, Shawn Bohrer <sbohrer@...advisors.com>,
	Jonathan Cooper <jcooper@...arflare.com>,
	eric.dumazet@...il.com
Subject: Re: udp: Question about busy_poll change

On Wed, Apr 09, 2014 at 03:13:21PM +0100, Edward Cree wrote:
> Commit 005ec9743394010cd37d86c3fd2e81978231cdbf, "udp: Only allow busy
> read/poll on connected sockets",
> causes a performance regression (increased latency) on some
> micro-benchmarks which don't connect() their UDP socket, and might well
> have a similar effect on real applications that do the same thing.
> As far as I can tell, the change is only needed in the case where the
> UDP socket is bound to INADDR_ANY, _and_ traffic on the chosen UDP port
> is received through more than one NIC (or perhaps through a NIC with
> separate NAPI contexts per receive queue?).

Yep.  Separate NAPI contexts per receive queue was one of the main
reasons for the change, though I suppose the INADDR_ANY case is
relevant as well.

> In particular, busy polling makes sense for a client (which will only be
> receiving packets from one remote address even though the socket is
> unconnected), or for a socket which has been bound to an interface's
> address (at least in the case of sfc, where we have one NAPI context for
> all the receive queues on an interface).

I agree that it makes sense in this case, but if you meet these
requirements then you can also just connect() your UDP socket.  The
real problem is that there is no way for the kernel to know that you
will only ever receive packets from a single remote address, so you
have to tell it by connecting.

I believe the sfc case where you only have a single NAPI context is
also valid, and it seems reasonable to me that if you can detect that
specific case, busy polling could be allowed.  I'm not sure how to
detect this, but I'm sure patches are welcome.

> So, what was the deeper rationale for this change?  Is there a
> correctness issue or does the old behaviour just affect performance
> through unnecessary busy_polling?  Or have I just misunderstood things
> completely?

If we are spinning on one NAPI context and a packet arrives on a
different rx queue, then you'll get unpredictable latencies and
out-of-order packets.  For the people using this feature that is
probably not desirable.

--
Shawn
