Message-ID: <517B242D.7040902@ti.com>
Date: Fri, 26 Apr 2013 20:04:45 -0500
From: Suman Anna <s-anna@...com>
To: Jassi Brar <jaswinder.singh@...aro.org>
CC: Loic PALLARDY <loic.pallardy@...com>,
Jassi Brar <jassisinghbrar@...il.com>,
"Ohad Ben-Cohen (ohad@...ery.com)" <ohad@...ery.com>,
Stephen Rothwell <sfr@...b.auug.org.au>,
"Andy Green (andy.green@...aro.org)" <andy.green@...aro.org>,
Russell King <linux@....linux.org.uk>,
Arnd Bergmann <arnd@...db.de>,
Tony Lindgren <tony@...mide.com>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
Linus Walleij <linus.walleij@...aro.org>,
"Rafael J. Wysocki" <rafael.j.wysocki@...el.com>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
"Omar Ramirez Luna (omar.ramirez@...itl.com)"
<omar.ramirez@...itl.com>,
"linux-arm-kernel@...ts.infradead.org"
<linux-arm-kernel@...ts.infradead.org>
Subject: Re: [PATCHv3 00/14] drivers: mailbox: framework creation
Hi Jassi,
On 04/25/2013 10:46 PM, Jassi Brar wrote:
> Hi Suman,
>
> On 26 April 2013 03:59, Suman Anna <s-anna@...com> wrote:
>> On 04/25/2013 12:20 AM, Jassi Brar wrote:
>> transmitting right away. OK, I thought you didn't want buffering; if that
>> is not the case, then the buffering should be within the main driver
>> code, like it is now, but configurable based on the controller or
>> mailbox properties. If it is present in individual controller drivers,
>> then we would be duplicating stuff. Are you envisioning that this be
>> left to the individual controllers?
>>
> Please don't accuse me of such bad visions :)
> I never said no-buffering and I never said buffering should be in
> controller drivers. In fact I don't remember ever objecting to how
> buffering is done in TI's framework.
> A controller could service only one request at a time, so let's give
> it just one at a time. Let the API handle the complexity of buffering.
>
Alright, I guess this got lost in translation :). I interpreted it that
way based on the fact that you wanted to get rid of the size field from
the mailbox_msg definition. Do you have a different mechanism in mind
for the buffering compared to the present one?
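For reference, the kind of core-side buffering I have in mind would
look roughly like this - just an untested sketch, with all structure
and function names made up (not from the posted patchset):

#include <linux/spinlock.h>
#include <linux/errno.h>

struct mbox_chan;

struct mbox_controller {
        int (*send_data)(struct mbox_chan *chan, void *msg);
};

struct mbox_chan {
        spinlock_t lock;
        void *ring[8];                  /* core-owned s/w buffer */
        unsigned int head, tail;
        bool active;                    /* request with controller? */
        struct mbox_controller *ctlr;
        void (*txcb)(void *cl_data, void *msg, int status);
        void *cl_data;
};

static int mbox_submit(struct mbox_chan *chan, void *msg)
{
        unsigned long flags;
        int ret = 0;

        spin_lock_irqsave(&chan->lock, flags);
        if ((chan->head + 1) % 8 == chan->tail) {
                ret = -EBUSY;           /* buffer full, client decides */
        } else {
                chan->ring[chan->head] = msg;
                chan->head = (chan->head + 1) % 8;
                if (!chan->active) {    /* controller idle: hand over one */
                        chan->active = true;
                        chan->ctlr->send_data(chan, chan->ring[chan->tail]);
                }
        }
        spin_unlock_irqrestore(&chan->lock, flags);
        return ret;
}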
>>> I am afraid you are confusing the meaning of 'atomic context' here.
>>> Atomic context doesn't mean instant transmission of data, but that
>>> the API calls can be made even from atomic context and that the
>>> client & controller can't sleep in callbacks from the API. So it's
>>> not moot.
>>
>> I understood the atomic context; the question is about the behavior
>> of the '.tx_done' callback when sending from atomic context. Is there
>> such a use case/need for you, in that you want to send a response
>> back from atomic context, yet get a callback?
>>
> Let me go into detail...
> The TX-wheel has to tick. Someone has to tell the framework that the
> last TX was consumed by the remote and that now it's time to submit
> the next TX (RX will always be driven by the controller's IRQ, so
> it's straightforward).
> If the controller h/w gets some interrupt indicating
> Remote-RTR/TX-Done, then the ticker is driven by the controller's
> TX-IRQ handler. Otherwise, if the controller can sense RTR but not
> report it (the status is readable in some register, but there is no
> irq), then the API has to poll it periodically and move the ticker.
> If the controller can neither report nor sense RTR, the
> client/protocol driver must run the ticker (usually upon receiving
> some ACK packet on the RX channel).
OK, I didn't think of a controller without an RTR interrupt. I would
think that such a controller is very rudimentary. I wonder if there are
any controllers like this out there.
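Just so we mean the same thing by the ticker: building on the
hypothetical sketch above, I picture something like this (again
untested):

/* The TX 'ticker': advances the s/w buffer and hands the
 * controller the next request, one at a time. */
static void mbox_txdone(struct mbox_chan *chan, int status)
{
        unsigned long flags;
        void *done;

        spin_lock_irqsave(&chan->lock, flags);
        done = chan->ring[chan->tail];
        chan->tail = (chan->tail + 1) % 8;
        if (chan->head != chan->tail)   /* more queued: next TX */
                chan->ctlr->send_data(chan, chan->ring[chan->tail]);
        else
                chan->active = false;
        spin_unlock_irqrestore(&chan->lock, flags);

        if (chan->txcb)                 /* may run in IRQ context */
                chan->txcb(chan->cl_data, done, status);
}

Whoever notices TX-done - the controller's TX-IRQ handler, a poll timer
in the API, or the client upon an RX ACK - would call this.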
> This TX ticker should be callable from atomic context (the
> controller's IRQ handler) and calls into the client's callback. It is
> desirable that the client be able to submit yet another TX request
> from the callback; that way the client can avoid having to schedule
> work from the callback if the TX doesn't involve any sleepable task.
> The scheme is working very well in the DMA-Engine stack.
>
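For my own clarity, the client-side chaining you describe would then
look something like this (hypothetical and untested; my_client_txcb
being registered as the txcb in the sketch above):

struct my_client {
        struct mbox_chan *chan;
        void *next_msg;         /* pre-prepared, no sleeping needed */
};

/* runs in atomic context, straight from mbox_txdone() above */
static void my_client_txcb(void *cl_data, void *msg, int status)
{
        struct my_client *mc = cl_data;

        if (status == 0 && mc->next_msg) {
                mbox_submit(mc->chan, mc->next_msg); /* no workqueue hop */
                mc->next_msg = NULL;
        }
}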
> BTW, TI's RX mechanism too seems broken for a common API. Receiving
> every few bytes via the 'notify' mechanism is very inefficient.
> Imagine a platform with no shared memory between co-processors, where
> the local processor wants to diagnose the remote by asking for
> critical data at least KBs in size.
No shared memory between co-processors combined with a relatively slow
wire transport is a bad architectural design to begin with.
> So when the API has nothing to do with the received packet and the
> controller has to get rid of it asap so as to be able to receive the
> next, IMHO there should be a short-circuit from controller to client
> via the API. No delay, no buffering of RX.
The current TI design is based on the fact that we can get multiple
messages on a single interrupt due to the h/w fifo, and the driver
takes care of the bottom half. Leaving it to the client puts a lot of
faith in the client and doesn't scale to multiple clients. The client
would have to do mostly the same work the driver is doing - so this
goes back to our base discussion point, which is the lack of support
for atomic-context receivers in the current code. I perceive this as an
attribute of the controller/mailbox device itself rather than of the
client.
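To make that concrete, such an attribute could be used along these
lines - only a sketch with hypothetical names, kfifo/workqueue used
purely for illustration:

#include <linux/kfifo.h>
#include <linux/workqueue.h>

struct mbox_rx_chan {
        bool atomic_rx;         /* attribute of the controller/device */
        void (*rxcb)(void *cl_data, void *msg);
        void *cl_data;
        struct kfifo rx_fifo;   /* holds message pointers */
        struct work_struct rx_work;
        /* rx_fifo and rx_work assumed initialised elsewhere */
};

/* called from the controller's RX IRQ handler */
static void mbox_rx_interrupt(struct mbox_rx_chan *chan, void *msg)
{
        if (chan->atomic_rx) {
                /* short-circuit: no delay, no buffering */
                chan->rxcb(chan->cl_data, msg);
        } else {
                /* buffer and let the existing bottom half deliver */
                kfifo_in(&chan->rx_fifo, &msg, sizeof(msg));
                schedule_work(&chan->rx_work);
        }
}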
>
>
>>> It's the controller driver that actually puts the data on the bus. So
>>> only it should define the format in which it accepts data from the
>>> clients. Every client should simply populate the packet structure
>>> defined in my_lovely_controller.h and pass on the struct pointer to
>>> the controller driver via the API.
>>> No negotiations for the driver seat among passengers :)
>>
>> OK, I was trying to avoid including my_lovely_controller.h and only
>> include the standard .h file as a client user; the client would
>> anyway need to have intrinsic knowledge of the packet structure.
>>
> Not including my_controller.h doesn't make things standard.
> As we know, the client anyway has to have intrinsic knowledge of the
> packet structure (which is dictated by the controller), so not
> including my_controller.h will only confuse people as to where the
> packet info came from.
OK, agreed.
>
>>>>
>>> I think the mailbox should be exclusively held by a client. That makes
> many things simpler. Also, remote firmwares won't always be robust
>>> enough to handle commands from different subsystems intermixed. The
>>> API only has to make sure the mailbox_get/put operations are very
>>> thin.
>>
>> This might be the case for specific remotes where we expect only one
>> client driver to be responsible for talking to them, but for generic
>> offloading you do not want this restriction. You do not want peer
>> clients to go through a single main client, as the latencies or the
>> infrastructure imposed by the main client may not be suitable for the
>> other clients. The stricter use case here would be the shareable
>> mailbox; if it is exclusive, as dictated by a controller or device
>> property, then so be it, and things get simplified for that
>> controller/device.
>>
> Shared vs. Exclusive has been the dilemma of DMAEngine too.
> If the controller has physical channels at least as many as clients,
> exclusivity is no problem.
> Sharing is desirable when the controller has to serve clients more
> than its physical channels. We solve that by having the controller
> declare exclusive virtual channels and internally scheduling the
> requests onto physical channels.
> And as Andy pointed out, some remote-ends may not cope with requests
> coming from different subsystems intermixed.
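If I paraphrase the DMAEngine-style scheme in code (untested,
hypothetical names):

#include <linux/list.h>

/* Clients get exclusive virtual channels; the controller
 * schedules them onto fewer physical channels. */
struct mbox_pchan;

struct mbox_vchan {
        struct list_head node;          /* on the pending list */
        struct mbox_pchan *pchan;       /* NULL until scheduled */
        void *owner;                    /* exclusive client */
};

struct mbox_pchan {
        struct mbox_vchan *cur;         /* vchan being served */
};

/* pick the next pending vchan for a free physical channel */
static void mbox_schedule(struct list_head *pending,
                          struct mbox_pchan *pchan)
{
        struct mbox_vchan *vc;

        if (pchan->cur || list_empty(pending))
                return;
        vc = list_first_entry(pending, struct mbox_vchan, node);
        list_del(&vc->node);
        vc->pchan = pchan;
        pchan->cur = vc;
}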
Even though the two scenarios look very similar, I believe there are
some slight differences. All the devices belonging to a controller may
not be of the same type (meaning, intended for the same remote or
usable interchangeably with one another). A scheme similar to the DMA
physical channels is definitely possible if your remote rx interrupt
can identify the device/channel to process, but that would be very much
dependent on the architecture of the controller. The particular example
I have in mind is s/w clients between the same set of remote and host
entities using the same device - the send path is anyway arbitrated by
the controller, and the same received message can be delivered to all
the clients, with each client deciding whether the packet belongs to it
or not. I agree that not all remote-ends will be able to cope with
intermixed requests, but isn't this again dependent on the controller
architecture?
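In code, the shared-device delivery I am describing would be roughly
this (hypothetical names, untested):

#include <linux/list.h>
#include <linux/spinlock.h>

struct shared_mbox {
        spinlock_t lock;
        struct list_head clients;       /* registered s/w clients */
};

struct shared_mbox_client {
        struct list_head node;
        /* returns true if this client owns the packet */
        bool (*rx_filter)(void *cl_data, void *msg);
        void *cl_data;
};

/* same received message offered to every registered client */
static void shared_mbox_rx(struct shared_mbox *smb, void *msg)
{
        struct shared_mbox_client *c;

        spin_lock(&smb->lock);
        list_for_each_entry(c, &smb->clients, node)
                if (c->rx_filter(c->cl_data, msg))
                        break;          /* owner found, stop offering */
        spin_unlock(&smb->lock);
}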
regards
Suman