Message-ID: <1375905901.27403.22.camel@deadeye.wl.decadent.org.uk>
Date:	Wed, 7 Aug 2013 22:05:01 +0200
From:	Ben Hutchings <bhutchings@...arflare.com>
To:	Eliezer Tamir <eliezer.tamir@...ux.intel.com>
CC:	Shawn Bohrer <sbohrer@...advisors.com>,
	Amir Vadai <amirv@...lanox.com>, <netdev@...r.kernel.org>
Subject: Re: low latency/busy poll feedback and bugs

On Tue, 2013-08-06 at 21:25 +0300, Eliezer Tamir wrote:
> On 06/08/2013 21:08, Shawn Bohrer wrote:
> > On Tue, Aug 06, 2013 at 10:41:48AM +0300, Eliezer Tamir wrote:
> >> For multicast, it is possible for incoming packets to come from more
> >> than one port (and therefore more than one queue).
> >> I'm not sure how we could handle that, but what we have today won't do
> >> well for that use-case.
> >  
> > It is unclear to me exactly what happens in this case.  With my simple
> > patch I'm assuming it will spin on the receive queue that received the
> > last packet for that socket.  What happens when a packet arrives on a
> > different receive queue than the one we were spinning on? I assume it
> > is still delivered but perhaps the spinning process won't get it until
> > the spinning time expires?  I'm just guessing and haven't attempted to
> > figure it out from looking through the code.
> 
> What will happen is that the current code will only busy poll on one
> queue, sometimes on this one, sometimes on that one.
> 
> Packets arriving on the other queue will still be serviced, but will
> suffer the latency of waiting for NAPI to schedule.
> 
> So your avg will be better, but your std. dev. will be much worse, and
> it's probably not worth it if you really expect two devices to receive
> data at the same time.

It seems like sk_mark_napi_id() should only be called on connected
sockets for now.

At least Solarflare controllers have 'wildcard' filters that can match
destination address (host, l4proto, port) only.  Perhaps ARFS could be
extended to include steering based on destination address when there is
a single unconnected socket bound to that address.  When that is
successful, busy-polling a single NAPI context should work nicely.

Where there are multiple unconnected sockets bound to the same unicast
address, it might make sense for polling on those sockets to prefer the
'local' NAPI context according to CPU topology.

Ben.

-- 
Ben Hutchings, Staff Engineer, Solarflare
Not speaking for my employer; that's the marketing department's job.
They asked us to note that Solarflare product names are trademarked.

