Date:	Mon, 21 Feb 2011 18:25:43 +0200
From:	Felix Radensky <felix@...edded-sol.com>
To:	arnd@...db.de
CC:	"netdev@...r.kernel.org" <netdev@...r.kernel.org>
Subject: Re: Advice on network driver design

Hi Arnd,

On 02/20/2011 09:13 PM, arnd@...db.de wrote:
> On Saturday 19 February 2011 14:37:43 Felix Radensky wrote:
>> Hi,
>>
>> I'm in the process of designing a network driver for custom
>> hardware and would like to get some advice from Linux network
>> gurus.
>>
>> The host platform is a Freescale P2020. The custom hardware is an
>> FPGA with several TX FIFOs, a single RX FIFO, and a set of registers.
>> The FPGA is connected to the CPU via PCI-E. The host CPU's DMA
>> controller is used to move packets to/from the FIFOs. Each FIFO has
>> its own set of events that generate interrupts; the events can be
>> enabled and disabled. A status register reflects the current state
>> of the events; a bit in the status register is cleared by the FPGA
>> when the corresponding event is handled. Reads and writes to the
>> status register have no effect on its contents.
>>
>> The device driver should support 80Mbit/sec of traffic in each direction.
>>
>> So far I have the TX side working. I'm using the Linux dmaengine
>> API to transfer packets to the FIFOs. The DMA completion interrupt
>> is handled by a per-FIFO work queue.
>>
>> My question is about RX. Would such a design benefit from NAPI?
>> If my understanding of NAPI is correct, it runs in softirq context,
>> so I cannot do any DMA work in dev->poll(). If I were to use NAPI,
>> I should probably disable RX interrupts, do all the DMA work in
>> some work queue, keep the RX packets in a list, and only then call
>> dev->poll(). Is that correct?
>>
>> Any other advice on how to write an efficient driver for this
>> hardware is most welcome. I can influence the FPGA design to some
>> degree, so if you think the FPGA should be changed to improve
>> things, please let me know.
> There are ongoing discussions about using virtio for this kind of
> connection. See http://www.mail-archive.com/linuxppc-dev@lists.ozlabs.org/msg49294.html
> for an archive.
>
> When you use virtio as the base, you can use the regular virtio-net
> driver or any other virtio high-level driver on top.
>
> 	Arnd
>
Thanks, I'll take a look at virtio. Some people seem to think it's
overkill for this task.

Felix.
--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
