Message-ID: <4B295F8C.4050905@caviumnetworks.com>
Date:	Wed, 16 Dec 2009 14:30:36 -0800
From:	David Daney <ddaney@...iumnetworks.com>
To:	Chetan Loke <chetanloke@...il.com>
CC:	Chris Friesen <cfriesen@...tel.com>, netdev@...r.kernel.org,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	linux-mips <linux-mips@...ux-mips.org>
Subject: Re: Irq architecture for multi-core network driver.

Chetan Loke wrote:
>>> Does your hardware do flow-based queues?  In this model you have
>>> multiple rx queues and the hardware hashes incoming packets to a single
>>> queue based on the addresses, ports, etc. This ensures that all the
>>> packets of a single connection always get processed in the order they
>>> arrived at the net device.
>>>
>> Indeed, this is exactly what we have.
>>
>>
>>> Typically in this model you have as many interrupts as queues
>>> (presumably 16 in your case).  Each queue is assigned an interrupt and
>>> that interrupt is affined to a single core.
> 
>> Certainly this is one mode of operation that should be supported, but I
>> would also like to be able to go for raw throughput and have as many cores
>> as possible reading from a single queue (like I currently have).
>>
> Well, you could let the NIC firmware (f/w) handle this. The f/w would
> know which interrupt was injected most recently. In other words, it
> would have a history of which CPUs would be available. So if some
> previously interrupted CPU isn't making good progress, then the
> firmware should route the incoming response packets to a different
> queue. This way some other CPU will pick it up.
> 


It isn't a NIC.  There is no firmware.  The system interrupt hardware is 
what it is and cannot be changed.

My current implementation still has a single input queue configured, and 
I get a maskable interrupt on a single CPU when packets are available. 
If the queue depth rises above a given threshold, I optionally send an 
IPI to another CPU to enable NAPI polling on that CPU.
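
A minimal sketch of that mechanism, assuming one napi_struct per CPU and 
written against the napi_schedule() API (the modern name for the 
netif_rx_schedule() mentioned below); rx_queue_depth(), do_rx_work(), 
pick_idle_cpu() and rx_fanout_threshold are hypothetical stand-ins for 
driver internals:

#include <linux/netdevice.h>
#include <linux/percpu.h>
#include <linux/smp.h>

/* One NAPI context per CPU, each registered with netif_napi_add()
 * at init time (not shown). */
static DEFINE_PER_CPU(struct napi_struct, rx_napi);

/* Runs on the target CPU via IPI: all it does is arm NAPI there. */
static void enable_napi_on_cpu(void *unused)
{
	napi_schedule(this_cpu_ptr(&rx_napi));
}

static int rx_napi_poll(struct napi_struct *napi, int budget)
{
	int work = do_rx_work(napi, budget);	/* hypothetical RX loop */

	/* Backlog above the threshold: recruit one more CPU by IPI
	 * instead of touching the system's interrupt steering. */
	if (rx_queue_depth() > rx_fanout_threshold) {
		/* Hypothetical policy; returns -1 once the
		 * module-parameter limit is reached. */
		int cpu = pick_idle_cpu();

		if (cpu >= 0)
			smp_call_function_single(cpu, enable_napi_on_cpu,
						 NULL, 0);
	}

	if (work < budget)
		napi_complete(napi);
	return work;
}

No interrupt-controller state changes hands here: extra CPUs are pulled 
in purely with IPIs, and each drops back out once its poll function runs 
out of work and calls napi_complete().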

Currently I have a module parameter that controls the maximum number of 
CPUs that will have NAPI polling enabled.
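
Expressed as an ordinary module parameter (the name here is made up):

#include <linux/moduleparam.h>

/* Hypothetical name: cap on how many CPUs may poll at once. */
static int max_rx_cpus = 1;
module_param(max_rx_cpus, int, 0444);
MODULE_PARM_DESC(max_rx_cpus,
		 "Maximum number of CPUs that will have NAPI polling enabled");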

This allows me to get multiple CPUs doing receive processing without 
having to hack into the lower levels of the system's interrupt 
processing code to try to do interrupt steering.  Since all the 
interrupt service routine was doing was calling netif_rx_schedule(), I 
can simply do this via smp_call_function_single().
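
In other words, the hard-IRQ handler reduces to the same one-liner the 
IPI runs; a sketch, with mask_rx_interrupt() standing in for the 
device-specific masking:

#include <linux/interrupt.h>
#include <linux/netdevice.h>
#include <linux/percpu.h>

/* Hypothetical handler: mask the source, arm NAPI locally, done. */
static irqreturn_t rx_interrupt(int irq, void *dev_id)
{
	mask_rx_interrupt();			/* device-specific stand-in */
	napi_schedule(this_cpu_ptr(&rx_napi));	/* rx_napi as in the sketch above */
	return IRQ_HANDLED;
}

Since napi_schedule() only arms polling on the CPU it runs on, 
smp_call_function_single() is the natural way to execute that same call 
on a different CPU.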

David Daney
