Date:	Fri, 27 May 2011 10:31:10 +0200
From:	Wolfgang Grandegger <wg@...ndegger.com>
To:	Oliver Hartkopp <socketcan@...tkopp.net>
CC:	Arnd Bergmann <arnd@...db.de>, sachi@...tralsolutions.com,
	davinci-linux-open-source@...ux.davincidsp.com,
	Alan Cox <alan@...rguk.ukuu.org.uk>,
	Subhasish Ghosh <subhasish@...tralsolutions.com>,
	nsekhar@...com, open list <linux-kernel@...r.kernel.org>,
	CAN NETWORK DRIVERS <socketcan-core@...ts.berlios.de>,
	Marc Kleine-Budde <mkl@...gutronix.de>,
	linux-arm-kernel@...ts.infradead.org, Netdev@...r.kernel.org,
	m-watkins@...com
Subject: Re: [PATCH v4 1/1] can: add pruss CAN driver.

Hi Oliver,

Sorry for the late reply.

On 05/23/2011 08:21 AM, Oliver Hartkopp wrote:
> On 22.05.2011 12:30, Arnd Bergmann wrote:
>> On Thursday 12 May 2011 16:41:58 Oliver Hartkopp wrote:
>>> E.g. assume you need the CAN-IDs 0x100, 0x200 and 0x300 in your application
>>> and for that reason you configure these IDs in the pruss CAN driver.
>>>
>>> What if someone generates a 100% CAN busload exactly on CAN-ID 0x100 then?
>>>
>>> Worst case (1 Mbit/s, DLC=0) you would need to handle about 21,000 irqs/s for
>>> the correctly received CAN frames with the filtered CAN-ID 0x100 ...
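
(For reference, the 21,000 figure follows from the minimum frame length: a
standard-ID data frame with DLC=0 occupies roughly 47 bits on the wire
including the interframe space, before bit stuffing, so at 1 Mbit/s that is
about 1,000,000 bit/s / 47 bit/frame ~ 21,000 frames/s - i.e. around 21,000
interrupts per second if every received frame raises one.)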
>>
>> Then I guess the main thing that a "smart" CAN implementation like pruss
>> should do is interrupt mitigation. When you have a constant flow of
>> packets coming in, the hardware should be able to DMA a lot of
>> them into kernel memory before the driver is required to pick them up,
>> and only get into interrupt driven mode when the kernel has managed
>> to process all outstanding packets.
>>
>>> This all depends heavily on Linux networking (skb handling, caching, etc.) and
>>> is pretty fast and optimized! That was also the reason why it ran so smoothly
>>> on the old PowerPC. The most common effect, if anything drops, is that the
>>> application (holding the socket) was not fast enough to handle the incoming
>>> data. NB: For that reason we implemented a CAN content filter (CAN_BCM) that
>>> is able to do content filtering and timeout monitoring in kernel space - all
>>> performed in the softirq.
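
For illustration, here is a minimal userspace sketch of the CAN_BCM usage
described above. It is not taken from the thread; the interface name "can0",
the CAN-ID 0x100, the all-ones content mask and the 1 s timeout are just
example values, and error handling is omitted.

	#include <stdio.h>
	#include <string.h>
	#include <unistd.h>
	#include <net/if.h>
	#include <sys/ioctl.h>
	#include <sys/socket.h>
	#include <linux/can.h>
	#include <linux/can/bcm.h>

	int main(void)
	{
		struct sockaddr_can addr = { .can_family = AF_CAN };
		struct ifreq ifr;
		struct {
			struct bcm_msg_head head;
			struct can_frame frame;
		} msg;
		int s = socket(PF_CAN, SOCK_DGRAM, CAN_BCM);

		memset(&ifr, 0, sizeof(ifr));
		strcpy(ifr.ifr_name, "can0");
		ioctl(s, SIOCGIFINDEX, &ifr);
		addr.can_ifindex = ifr.ifr_ifindex;
		connect(s, (struct sockaddr *)&addr, sizeof(addr));

		/* Subscribe to CAN-ID 0x100: deliver RX_CHANGED only when data
		 * bits covered by the mask change, RX_TIMEOUT after 1 s silence. */
		memset(&msg, 0, sizeof(msg));
		msg.head.opcode       = RX_SETUP;
		msg.head.flags        = SETTIMER | STARTTIMER;
		msg.head.can_id       = 0x100;
		msg.head.ival1.tv_sec = 1;		/* timeout monitoring */
		msg.head.nframes      = 1;
		msg.frame.can_dlc     = 8;
		memset(msg.frame.data, 0xff, 8);	/* content filter mask */
		write(s, &msg, sizeof(msg));

		/* Filtering and timeout supervision now happen in kernel space
		 * (softirq); read() only wakes us up for RX_CHANGED/RX_TIMEOUT. */
		for (;;) {
			if (read(s, &msg, sizeof(msg)) > 0 &&
			    msg.head.opcode == RX_TIMEOUT)
				printf("CAN-ID 0x%X timed out\n", msg.head.can_id);
		}
		return 0;
	}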
>>
>> Right, dropping packets that no process is waiting for should be done as
>> early as possible. In pruss-can, the idea was to do it in hardware, which
>> doesn't really work all that well for the reasons discussed before.
>> Dropping the frames in the NAPI poll function (softirq time) seems like a
>> logical choice.
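
On the driver side that could look roughly like the sketch below. This is not
the pruss code; struct pruss_can_priv and the pruss_can_rx_pending(),
pruss_can_rx_one() and pruss_can_enable_rx_irq() helpers are hypothetical
placeholders for the hardware access:

	/* Hypothetical NAPI poll: the RX interrupt is disabled in the irq
	 * handler, frames are drained here in softirq context up to the NAPI
	 * budget, and the interrupt is only re-enabled once the FIFO is empty. */
	static int pruss_can_napi_poll(struct napi_struct *napi, int budget)
	{
		struct pruss_can_priv *priv =
			container_of(napi, struct pruss_can_priv, napi);
		int work_done = 0;

		while (work_done < budget && pruss_can_rx_pending(priv)) {
			/* alloc_can_skb() + read one mailbox/FIFO entry */
			struct sk_buff *skb = pruss_can_rx_one(priv);

			if (skb)
				netif_receive_skb(skb);	/* af_can delivers to
							 * subscribed sockets or
							 * frees the skb here */
			work_done++;
		}

		if (work_done < budget) {
			napi_complete(napi);
			pruss_can_enable_rx_irq(priv);
		}
		return work_done;
	}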
> 
> In 'real world' CAN setups you'll never see 21,000 CAN frames per second (and
> therefore 21,000 irqs/s) - you are usually designing CAN network traffic with
> less than 60% busload. So interrupt rates somewhere below 1000 irqs/s can be
> assumed.
> 
> From what I've seen so far, a 3-4 message RX FIFO and NAPI support just make it.

I think you are speaking about the SJA1000, which is able to buffer 64 bytes,
corresponding to up to 4 messages. There are CAN controllers able to queue
more messages, or just a single one, in which case NAPI adds overhead.
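
(In the SJA1000 the 64-byte RX FIFO holds between 3 and 13 bytes per received
frame - one frame-information byte, a 2- or 4-byte identifier and 0-8 data
bytes - so with worst-case frames roughly 64 / 13 ~ 4 messages fit, which is
where the 3-4 message figure comes from.)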

> @Marc/Wolfgang: Would this also be your recommendation for a CAN controller
> design that supports SocketCAN in the best way?

Anyway, NAPI is *always* useful as it helps with the infamous interrupt
flooding.

Wolfgang.
