Message-ID: <46D4376E.3000900@katalix.com>
Date:	Tue, 28 Aug 2007 15:55:42 +0100
From:	James Chapman <jchapman@...alix.com>
To:	Jan-Bernd Themann <ossthema@...ibm.com>
CC:	David Miller <davem@...emloft.net>,
	shemminger@...ux-foundation.org, akepner@....com,
	netdev@...r.kernel.org, raisch@...ibm.com, themann@...ibm.com,
	linux-kernel@...r.kernel.org, linuxppc-dev@...abs.org,
	meder@...ibm.com, tklein@...ibm.com, stefan.roscher@...ibm.com
Subject: Re: RFC: issues concerning the next NAPI interface

Jan-Bernd Themann wrote:
> On Tuesday 28 August 2007 11:22, James Chapman wrote:
>>> So in this scheme what runs ->poll() to process incoming packets?
>>> The hrtimer?
>> No, the regular NAPI networking core calls ->poll() as usual; no timers 
>> are involved. This scheme simply delays the napi_complete() from the 
>> driver so that the device stays in the poll list longer. It means that 
>> its ->poll() will keep being called for 1-2 jiffies even when there is 
>> no work to do, hence the optimization at the top of ->poll() to handle 
>> that case efficiently. The device's ->poll() is called by the NAPI core 
>> until it has continuously done no work for 1-2 jiffies, at which point 
>> it finally does the netif_rx_complete() and re-enables its interrupts.
>>
> I'm not sure if I understand your approach correctly.
> This approach may reduce the number of interrupts, but it does so
> by blocking the CPU for up to 1 jiffy (which can be quite some time
> on some platforms), so no other application, tasklet or softirq
> can do anything in between.
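
(For concreteness, the delayed-completion scheme I described might look 
roughly like this in a driver's ->poll(), against the reworked NAPI 
interface. It's only a sketch: mydrv_process_rx(), 
mydrv_enable_interrupts() and the last_work field are hypothetical.)

struct mydrv_priv {
	struct napi_struct napi;
	struct net_device *dev;
	unsigned long last_work;	/* jiffies when we last did work */
};

static int mydrv_poll(struct napi_struct *napi, int budget)
{
	struct mydrv_priv *priv = container_of(napi, struct mydrv_priv, napi);
	int work;

	/* Cheap check at the top, so the idle polls cost little. */
	work = mydrv_process_rx(priv, budget);
	if (work)
		priv->last_work = jiffies;

	/* Delay the completion: stay on the poll list until we have
	 * continuously done no work for 1-2 jiffies, then complete
	 * and re-enable the device's interrupts. */
	if (!work && time_after(jiffies, priv->last_work + 1)) {
		netif_rx_complete(priv->dev, napi);
		mydrv_enable_interrupts(priv);
	}

	return work;
}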

I think I've misread the reworked NAPI net_rx_action() code. I thought 
it ran each device's ->poll() just once, rescheduling the NET_RX softirq 
again if a device stayed in polled mode. I can see now that it loops 
while one or more devices stay in the poll list, for up to a jiffy, just 
as it always has. So by keeping the device in the poll list without 
consuming quota, net_rx_action() spins until the next jiffy tick unless 
another device consumes quota, as you say.
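
The control flow I mean is roughly this (paraphrased from memory, not 
the exact kernel code):

static void net_rx_action(struct softirq_action *h)
{
	struct list_head *list = &__get_cpu_var(softnet_data).poll_list;
	unsigned long time_limit = jiffies + 1;
	int budget = netdev_budget;

	while (!list_empty(list)) {
		struct napi_struct *n = list_entry(list->next,
						   struct napi_struct,
						   poll_list);
		int work;

		/* A device that does no work but doesn't complete
		 * stays on the list and is polled again at once. */
		work = n->poll(n, n->weight);
		budget -= work;

		/* Only budget exhaustion or the jiffy tick breaks
		 * out, so a 0-work device that stays listed makes
		 * this loop spin until one of those happens. */
		if (budget <= 0 || time_after(jiffies, time_limit)) {
			__raise_softirq_irqoff(NET_RX_SOFTIRQ);
			break;
		}
	}
}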

I can only assume that the encouraging results I get with this scheme 
are specific to my test setups (measuring packet forwarding rates). I 
agree that it isn't desirable to tie up the CPU for up to a jiffy in 
net_rx_action() in order to do this. I need to go away and rework my 
ideas. Perhaps it is possible to get the behavior I'm looking for by 
somehow special-casing the zero return from ->poll() in 
net_rx_action(), but I'm not sure.
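
If that turned out to be workable, the special case might be no more 
than something like this inside the loop sketched above (hand-waving, 
not a tested patch):

		/* ->poll() did no work but left the device on the
		 * poll list: rotate it to the tail, reschedule the
		 * softirq and yield the CPU instead of spinning to
		 * the next jiffy tick. */
		if (work == 0) {
			list_move_tail(&n->poll_list, list);
			__raise_softirq_irqoff(NET_RX_SOFTIRQ);
			break;
		}

That would trade extra softirq round trips for not monopolizing the 
CPU, but whether it preserves the interrupt-rate benefit is exactly 
what I'd need to measure.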

Thanks for asking questions.

-- 
James Chapman
Katalix Systems Ltd
http://www.katalix.com
Catalysts for your Embedded Linux software development

