Message-Id: <FF7761E8-377A-43AD-96B2-83BA140E030B@goldelico.com>
Date: Sun, 21 Aug 2016 20:23:10 +0200
From: "H. Nikolaus Schaller" <hns@...delico.com>
To: One Thousand Gnomes <gnomes@...rguk.ukuu.org.uk>
Cc: Sebastian Reichel <sre@...nel.org>, Rob Herring <robh@...nel.org>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
Marcel Holtmann <marcel@...tmann.org>,
Jiri Slaby <jslaby@...e.com>, Pavel Machek <pavel@....cz>,
Peter Hurley <peter@...leysoftware.com>,
NeilBrown <neil@...wn.name>, Arnd Bergmann <arnd@...db.de>,
Linus Walleij <linus.walleij@...aro.org>,
"open list:BLUETOOTH DRIVERS" <linux-bluetooth@...r.kernel.org>,
"linux-serial@...r.kernel.org" <linux-serial@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [RFC PATCH 0/3] UART slave device bus
> On 21.08.2016, at 19:09, One Thousand Gnomes <gnomes@...rguk.ukuu.org.uk> wrote:
>
>> Let me ask a question about your centralized and pre-cooked buffering approach.
>>
>> As far as I see, even then the kernel API must notify the driver at the right moment
>> that a new block has arrived. Right?
>
> The low level driver queues words (data byte, flag byte)
> The buffer processing workqueue picks those bytes from the queue and
> atomically empties the queue
When and how fast is the work queue scheduled?
And by which event?
> The workqueue invokes the receive handler.
This should be faster than a driver processing the incoming bytes directly?
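
To check that I read this correctly, here is a small user-space model of that path
(all names are mine, not taken from the patch set): the interrupt side queues
(data, flag) words, and the deferred work atomically empties the whole queue and
hands the batch to the slave driver's receive handler.

#include <pthread.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define QSIZE 4096

struct rx_word { uint8_t data; uint8_t flag; };

static struct rx_word queue[QSIZE];
static unsigned int count;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

/* "interrupt" side: one (data, flag) word per received character */
static void rx_queue_word(uint8_t data, uint8_t flag)
{
	pthread_mutex_lock(&lock);
	if (count < QSIZE)
		queue[count++] = (struct rx_word){ data, flag };
	pthread_mutex_unlock(&lock);
}

/* the slave driver's receive handler, called with whatever has arrived */
static void slave_receive(const struct rx_word *words, unsigned int n)
{
	printf("received %u words, first byte 0x%02x\n", n, n ? words[0].data : 0);
}

/* body of the deferred work: take everything queued so far in one go */
static void rx_work(void)
{
	struct rx_word batch[QSIZE];
	unsigned int n;

	pthread_mutex_lock(&lock);
	n = count;
	memcpy(batch, queue, n * sizeof(*batch));
	count = 0;
	pthread_mutex_unlock(&lock);

	if (n)
		slave_receive(batch, n);
}

int main(void)
{
	const char *msg = "$GPGGA,dummy payload*00\r\n";

	for (const char *p = msg; *p; p++)
		rx_queue_word((uint8_t)*p, 0 /* TTY_NORMAL */);
	rx_work();	/* in the kernel this would run from the workqueue */
	return 0;
}

If that model is right, my questions above boil down to how much latency the deferred
work adds before slave_receive() sees the bytes.
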
>
>> But how does the kernel API know how long such a block is?
>
> It's as long as the data that has arrived in that time.
Which means the work queue handler has to decide whether there is enough data to decode a
frame and, if not, wait a little until more arrives.
Or it has to assemble chunks into a frame, i.e. copy data around.
Both seem a waste of scarce CPU cycles in high-speed situations to me.
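
For NMEA-like framing ($ ... \r\n) that means the receive handler ends up with
something like the following sketch (all names are hypothetical, mine): keep a
private reassembly buffer, append every chunk, and only hand a frame on once the
terminator has arrived.

#include <stdio.h>
#include <string.h>

#define FRAME_MAX 256

static char frame[FRAME_MAX];
static size_t fill;

static void handle_frame(const char *buf, size_t len)
{
	printf("frame: %.*s", (int)len, buf);
}

/* called with whatever chunk the buffering layer happens to deliver */
static void nmea_receive(const char *chunk, size_t len)
{
	if (len > sizeof(frame) - fill) {
		fill = 0;			/* overflow: drop what we had */
		if (len > sizeof(frame))
			return;			/* oversized chunk, ignored in this sketch */
	}
	memcpy(frame + fill, chunk, len);	/* the extra copy */
	fill += len;

	for (;;) {
		char *start = memchr(frame, '$', fill);
		char *end;

		if (!start) {
			fill = 0;		/* no start marker at all: discard */
			return;
		}
		end = memchr(start, '\n', fill - (size_t)(start - frame));
		if (!end)
			return;			/* frame incomplete: wait for the next chunk */

		handle_frame(start, (size_t)(end - start) + 1);

		fill -= (size_t)((end + 1) - frame);	/* keep what follows the frame */
		memmove(frame, end + 1, fill);		/* and move it down: another copy */
	}
}

int main(void)
{
	nmea_receive("$GPGGA,dummy", 12);	/* nothing to decode yet */
	nmea_receive(" payload*00\r\n", 13);	/* the frame completes only here */
	return 0;
}

Every chunk costs a copy, and the payload is only decoded once the terminator shows
up: that is the waiting and copying I mean.
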
>
>> Usually there is a start byte/character, sometimes a length indicator, then payload data,
>> some checksum and finally a stop byte/character. For NMEA it is $, no length, * and \r\n.
>> For other serial protocols it might be AT, no length, and \r. Or something different.
>> HCI seems to use 2 byte op-code or 1 byte event code and 1 byte parameter length.
>
> It doesn't look for any kind of protocol block headers.
Which might become the pitfall of the design, because, as I have described, framing is an
essential part of processing UART-based protocols. You seem to focus only on efficient
buffering, but not on efficient processing of the queued data.
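
To stay with the HCI example from above: for a length-prefixed protocol the handler
cannot even tell how many bytes it needs before it has parsed the header, so somebody
has to look at the queued data anyway. A rough sketch, simplified to H4 command and
event packets (the function name is mine):

#include <stddef.h>
#include <stdint.h>

#define H4_CMD   0x01	/* followed by 2-byte opcode + 1-byte parameter length */
#define H4_EVENT 0x04	/* followed by 1-byte event code + 1-byte parameter length */

/*
 * Return the total frame length once enough of the header is buffered,
 * or 0 if the handler still has to wait for more data.
 */
size_t h4_frame_len(const uint8_t *buf, size_t avail)
{
	if (!avail)
		return 0;

	switch (buf[0]) {
	case H4_CMD:
		return avail < 4 ? 0 : 4 + (size_t)buf[3];	/* type + opcode + len + params */
	case H4_EVENT:
		return avail < 3 ? 0 : 3 + (size_t)buf[2];	/* type + event + len + params */
	default:
		return 1;	/* unknown packet type: resync one byte at a time */
	}
}

This is the kind of per-protocol knowledge I mean when I say that the queued data
still has to be processed efficiently, not just buffered.
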
> The routine
> invoked by the work queue does any frame recovery.
>
>> So I would even conclude that you usually can't even use DMA based UART receive
>> processing for arbitrary and not well-defined protocols. Or have to assume that the
>
> We do, today for bluetooth and other protocols just fine
I think it works (even with a user-space HCI daemon) because Bluetooth HCI is slow (<300 kByte/s).
> - it's all about
> data flows not about framing in the protocol sense.
Yes, but you should also take framing into account for a solution that helps to implement
UART slave devices. That is my concern.
BR,
Nikolaus