Message-Id: <20070824.143751.112614506.davem@davemloft.net>
Date:	Fri, 24 Aug 2007 14:37:51 -0700 (PDT)
From:	David Miller <davem@...emloft.net>
To:	ossthema@...ibm.com
Cc:	netdev@...r.kernel.org, raisch@...ibm.com, themann@...ibm.com,
	linux-kernel@...r.kernel.org, linuxppc-dev@...abs.org,
	meder@...ibm.com, tklein@...ibm.com, stefan.roscher@...ibm.com
Subject: Re: RFC: issues concerning the next NAPI interface

From: Jan-Bernd Themann <ossthema@...ibm.com>
Date: Fri, 24 Aug 2007 15:59:16 +0200

> 1) The current implementation of netif_rx_schedule, netif_rx_complete
>    and net_rx_action has the following problem: netif_rx_schedule
>    sets the NAPI_STATE_SCHED flag and adds the NAPI instance to the poll_list.
>    net_rx_action checks NAPI_STATE_SCHED; if it is still set, it adds the
>    device to the poll_list again (as well). netif_rx_complete clears
>    NAPI_STATE_SCHED. If an interrupt handler calls netif_rx_schedule on CPU 2
>    after netif_rx_complete has been called on CPU 1 (and the poll function
>    has not returned yet), the NAPI instance is added to the poll_list twice
>    (once by netif_rx_schedule and once by net_rx_action). The problem then
>    shows up when netif_rx_complete is called twice for the device, which
>    triggers a BUG().

Indeed, this is the "who should manage the list" problem.
Probably the answer is that whoever transitions the NAPI_STATE_SCHED
bit from cleared to set should do the list addition.
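To make the rule concrete, here is a minimal sketch, assuming the
napi_struct layout from the in-flight NAPI rework (a "state" bitmask
and a per-CPU poll_list) and a hypothetical __napi_schedule() helper
that does the actual list_add and softirq raise.  It illustrates the
ownership rule, it is not a finished patch:

/*
 * Only the CPU that flips NAPI_STATE_SCHED from clear to set may add
 * the instance to the poll list.  test_and_set_bit makes that
 * transition atomic, so the interrupt handler and net_rx_action can
 * race freely without producing a double list add.
 */
static inline int napi_schedule_prep(struct napi_struct *n)
{
	/* non-zero only for the clear -> set transition */
	return !test_and_set_bit(NAPI_STATE_SCHED, &n->state);
}

static inline void napi_schedule(struct napi_struct *n)
{
	if (napi_schedule_prep(n))
		__napi_schedule(n);	/* list_add_tail + raise NET_RX_SOFTIRQ */
}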

Patches welcome :-)

> 3) On modern systems incoming packets are processed very fast. Especially
>    on SMP systems with multiple queues, we process only a few packets per
>    NAPI poll cycle, so NAPI does not work very well here and the interrupt
>    rate remains high. What we need is some sort of timer-based polling mode
>    that reschedules a device after a certain amount of time in high-load
>    situations. With high resolution timers this could work well; ordinary
>    kernel timers are too coarse. A finer granularity is needed to keep
>    latency down (and queue lengths moderate).

This is why a minimal level of HW interrupt mitigation should be enabled
in your chip.  If the chip does not support mitigation, you will indeed
need to look into high resolution timers or other schemes to alleviate
the interrupt load.
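As a rough illustration of the hrtimer fallback: a driver could keep
its RX interrupt disabled while load stays high and re-schedule its own
NAPI context from a timer callback instead.  Everything below
(my_adapter, my_poll_timer_fn, the 100 usec interval) is made up for
the example:

#include <linux/kernel.h>
#include <linux/hrtimer.h>
#include <linux/ktime.h>
#include <linux/netdevice.h>

#define MY_POLL_INTERVAL_NS	(100 * 1000)	/* 100 usec between forced polls */

struct my_adapter {
	struct napi_struct	napi;
	struct hrtimer		poll_timer;
};

static enum hrtimer_restart my_poll_timer_fn(struct hrtimer *t)
{
	struct my_adapter *ap = container_of(t, struct my_adapter, poll_timer);

	napi_schedule(&ap->napi);	/* run ->poll() again, no device IRQ */
	return HRTIMER_NORESTART;
}

/*
 * Called from the driver's ->poll() when work remains but we want to
 * back off briefly instead of re-enabling the device interrupt.
 * Assumes probe did:
 *	hrtimer_init(&ap->poll_timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
 *	ap->poll_timer.function = my_poll_timer_fn;
 */
static void my_rearm_poll(struct my_adapter *ap)
{
	hrtimer_start(&ap->poll_timer,
		      ktime_set(0, MY_POLL_INTERVAL_NS),
		      HRTIMER_MODE_REL);
}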

I do not think it deserves a generic core networking helper facility;
the chips that cannot mitigate interrupts are few and obscure.
