Date:	Wed, 12 Sep 2007 17:26:33 +0100
From:	James Chapman <jchapman@...alix.com>
To:	Stephen Hemminger <shemminger@...ux-foundation.org>
CC:	hadi@...erus.ca, Bill Fink <billfink@...dspring.com>,
	netdev@...r.kernel.org, davem@...emloft.net, jeff@...zik.org,
	mandeep.baines@...il.com, ossthema@...ibm.com
Subject: Re: RFC: possible NAPI improvements to reduce interrupt rates for
 low traffic rates

Stephen Hemminger wrote:
> On Wed, 12 Sep 2007 14:50:01 +0100
> James Chapman <jchapman@...alix.com> wrote:
>> By low traffic, I assume you mean a rate at which the NAPI driver 
>> doesn't stay in polled mode. The problem is that that rate is getting 
>> higher all the time, as interface and CPU speeds increase. This results 
>> in too many interrupts and NAPI thrashing in/out of polled mode very 
>> quickly.
> 
> But if you compare this to non-NAPI driver the same softirq
> overhead happens. The problem is that for many older devices disabling IRQ's
> require an expensive non-cached PCI access. Smarter, newer devices
> all use MSI which is pure edge triggered and with proper register
> usage, NAPI should be no worse than non-NAPI.

While MSI is good, the CPU interrupt overhead (saving/restoring CPU 
registers) can hurt badly, especially on RISC CPUs. When packet 
processing is interrupt-driven, the kernel's scheduler plays second 
fiddle to hardware interrupt and softirq scheduling; even super-priority 
real-time threads don't get a look-in.

When traffic rates cause 1 interrupt per tx/rx packet event, NAPI will 
use more CPU and have higher latency than non-NAPI because of the extra 
work done to enter and leave polled mode. At higher packet rates, NAPI 
works very well, unlike non-NAPI which usually needs hardware interrupt 
mitigation to avoid interrupt live-lock.

I think NAPI should be a _requirement_ for new net drivers. But I 
recognize that it has some issues, hence this thread.

-- 
James Chapman
Katalix Systems Ltd
http://www.katalix.com
Catalysts for your Embedded Linux software development

