Message-ID: <564630D2.4020307@gmail.com>
Date:	Fri, 13 Nov 2015 10:49:54 -0800
From:	Alexander Duyck <alexander.duyck@...il.com>
To:	Eric Dumazet <eric.dumazet@...il.com>
Cc:	Jeff Kirsher <jeffrey.t.kirsher@...el.com>, davem@...emloft.net,
	Jesse Brandeburg <jesse.brandeburg@...el.com>,
	netdev@...r.kernel.org, nhorman@...hat.com, sassmann@...hat.com,
	jogreene@...hat.com
Subject: Re: [net-next 04/17] drivers/net/intel: use napi_complete_done()

On 11/13/2015 08:49 AM, Eric Dumazet wrote:
> On Fri, 2015-11-13 at 08:06 -0800, Alexander Duyck wrote:
>
>> Yes, I'm pretty certain you cannot use this napi_complete_done with
>> anything that supports busy poll sockets.  The problem is you need to
>> flush any existing lists before yielding to the socket polling in order
>> to avoid packet ordering issues between the NAPI polling routine and the
>> socket polling routine.
> My plan is to make busy poll independent of GRO / RPS / RFS, and generic
> if possible, for all NAPI drivers. (No need to absolutely provide
> ndo_busy_poll().)
>
> I really do not see GRO being a problem for low latency: RPC messages
> are terminated by the PSH flag, which takes care of flushing the GRO engine.

Right.  I wasn't thinking so much about GRO delaying the frames as the 
fact that ixgbe calls netif_receive_skb instead of napi_gro_receive 
when busy polling.  So you might have frames left in the GRO list that 
get bypassed by packets pulled in during busy polling.
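
For reference, ixgbe's delivery helper looks roughly like this (a
simplified sketch from memory; the exact struct and helper names,
ixgbe_q_vector and ixgbe_qv_busy_polling(), may be slightly off):

static void ixgbe_rx_skb(struct ixgbe_q_vector *q_vector,
                         struct sk_buff *skb)
{
        if (ixgbe_qv_busy_polling(q_vector))
                netif_receive_skb(skb);         /* bypasses the GRO list */
        else
                napi_gro_receive(&q_vector->napi, skb);
}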

> For mixed use, (low latency and other kind of flows), GRO is a win.

Agreed.

> With the following sk_busy_loop() , we :
>
> - allow tunneling traffic to use busy poll as well as native traffic.
> - allow RFS/RPS to be used (sending IPIs to other cpus if needed)
> - use the 'let's burn cpu cycles' time to do useful work (like TX completions, RCU callbacks...)
> - implement busy poll for all NAPI drivers.
>
>          rcu_read_lock();
>          napi = napi_by_id(sk->sk_napi_id);
>          if (!napi)
>                  goto out;
>          ops = napi->dev->netdev_ops;
>
>          for (;;) {
>                  local_bh_disable();
>                  rc = 0;
>                  if (ops->ndo_busy_poll) {
>                          rc = ops->ndo_busy_poll(napi);
>                  } else if (napi_schedule_prep(napi)) {
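>                          /* No ndo_busy_poll hook: claim NAPI ownership
>                           * (as the softirq path would) and run the driver's
>                           * normal poll routine with a small budget. */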
>                          rc = napi->poll(napi, 4);
>                          if (rc == 4) {
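>                                  /* Budget fully consumed: the device likely
>                                   * has more packets queued, so report the
>                                   * work done and reschedule NAPI before
>                                   * looping. */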
>                                  napi_complete_done(napi, rc);
>                                  napi_schedule(napi);
>                          }
>                  }
>                  if (rc > 0)
>                          NET_ADD_STATS_BH(sock_net(sk),
>                                           LINUX_MIB_BUSYPOLLRXPACKETS, rc);
>                  local_bh_enable();
>
>                  if (rc == LL_FLUSH_FAILED ||
>                      nonblock ||
>                      !skb_queue_empty(&sk->sk_receive_queue) ||
>                      need_resched() ||
>                      busy_loop_timeout(end_time))
>                          break;
>
>                  cpu_relax();
>          }
>          rcu_read_unlock();

Sounds good.

- Alex
