Message-ID: <CABb+yY1S52aK63coeNuiH+Ca8GERq=ukCPFidcitUEMhHVdsyg@mail.gmail.com>
Date: Wed, 16 Mar 2016 10:46:01 +0530
From: Jassi Brar <jassisinghbrar@...il.com>
To: Nishanth Menon <nm@...com>
Cc: Devicetree List <devicetree@...r.kernel.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Rob Herring <robh+dt@...nel.org>,
"linux-arm-kernel@...ts.infradead.org"
<linux-arm-kernel@...ts.infradead.org>,
Santosh Shilimkar <ssantosh@...nel.org>,
Franklin S Cooper Jr <fcooper@...com>
Subject: Re: [PATCH V2 2/2] mailbox: Introduce TI message manager driver
On Tue, Mar 15, 2016 at 10:35 PM, Nishanth Menon <nm@...com> wrote:
> On Tue, Mar 15, 2016 at 12:31 AM, Jassi Brar <jassisinghbrar@...il.com> wrote:
>> On Tue, Mar 8, 2016 at 8:07 PM, Nishanth Menon <nm@...com> wrote:
>>> Jassi,
>>>
>>> On Tue, Mar 8, 2016 at 1:10 AM, Jassi Brar <jassisinghbrar@...il.com> wrote:
>>>> On Tue, Mar 8, 2016 at 2:18 AM, Nishanth Menon <nm@...com> wrote:
>>>>> On 03/07/2016 12:31 PM, Jassi Brar wrote:
>>>>>> On Fri, Mar 4, 2016 at 8:05 PM, Nishanth Menon <nm@...com> wrote:
>>>>>>>>
>>>>>>>>> +static int ti_msgmgr_send_data(struct mbox_chan *chan, void *data)
>>>>>>>>> +{
>>>>>>>>> + struct device *dev = chan->mbox->dev;
>>>>>>>>> + struct ti_msgmgr_inst *inst = dev_get_drvdata(dev);
>>>>>>>>> + const struct ti_msgmgr_desc *desc;
>>>>>>>>> + struct ti_queue_inst *qinst = chan->con_priv;
>>>>>>>>> + int msg_count, num_words, trail_bytes;
>>>>>>>>> + struct ti_msgmgr_message *message = data;
>>>>>>>>> + void __iomem *data_reg;
>>>>>>>>> + u32 *word_data;
>>>>>>>>> +
>>>>>>>>> + if (WARN_ON(!inst)) {
>>>>>>>>> + dev_err(dev, "no platform drv data??\n");
>>>>>>>>> + return -EINVAL;
>>>>>>>>> + }
>>>>>>>>> + desc = inst->desc;
>>>>>>>>> +
>>>>>>>>> + if (desc->max_message_size < message->len) {
>>>>>>>>> + dev_err(dev, "Queue %s message length %d > max %d\n",
>>>>>>>>> + qinst->name, message->len, desc->max_message_size);
>>>>>>>>> + return -EINVAL;
>>>>>>>>> + }
>>>>>>>>> +
>>>>>>>>> + /* Are we able to send this or not? */
>>>>>>>>> + msg_count = ti_msgmgr_queue_get_num_messages(qinst);
>>>>>>>>> + if (msg_count >= desc->max_messages) {
>>>>>>>>> + dev_warn(dev, "Queue %s is full (%d messages)\n", qinst->name,
>>>>>>>>> + msg_count);
>>>>>>>>> + return -EBUSY;
>>>>>>>>> + }
>>>>>>>> This seems fishy. The mailbox API always submits one 'complete'
>>>>>>>> message to be sent and checks for completion via last_tx_done()
>>>>>>>> before calling send_data() again. Controller drivers are not
>>>>>>>> supposed to queue messages - the mailbox core does. So you should
>>>>>>>> never be unable to send a message.
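
To illustrate that contract (a generic sketch only, not the ti_msgmgr
code; every "dummy_*" name below is made up): the core hands send_data()
exactly one message per channel and does not call it again until that TX
is known to be done, so the controller driver never needs a backlog of
its own.

#include <linux/errno.h>
#include <linux/io.h>
#include <linux/mailbox_controller.h>

/* Hypothetical per-channel state: one TX register, one "busy" status bit. */
struct dummy_chan {
        void __iomem *tx_reg;
        void __iomem *status_reg;
};

static bool dummy_tx_slot_free(struct dummy_chan *dc)
{
        return !(readl(dc->status_reg) & 0x1);  /* made-up "busy" bit */
}

static int dummy_send_data(struct mbox_chan *chan, void *data)
{
        struct dummy_chan *dc = chan->con_priv;

        /* The core sends one message at a time, so there is no backlog here. */
        if (!dummy_tx_slot_free(dc))
                return -EBUSY;  /* core retries; nothing is buffered in the driver */

        writel(*(u32 *)data, dc->tx_reg);       /* push the single message */
        return 0;
}

static bool dummy_last_tx_done(struct mbox_chan *chan)
{
        /* Consulted only under TXDONE_BY_POLL: has *our* last write been consumed? */
        return dummy_tx_slot_free(chan->con_priv);
}
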
>>>>>>>
>>>>>>>
>>>>>>> OK - to explain this, a few reasons (the queued-messages check and
>>>>>>> the usage of last_tx_done are kind of an intertwined answer):
>>>>>>> a) We need to remember that the message manager has a shared RAM;
>>>>>>> multiple transmitters on other queues can be sharing the same.
>>>>>>> Unfortunately, we don't get a threshold kind of interrupt or status
>>>>>>> that I am able to exploit in the current incarnation of the solution.
>>>>>>> The best we can do in the full system is to constrain the maximum
>>>>>>> number of messages pending simultaneously in each of the queues from
>>>>>>> the various transmitters in the SoC.
>>>>>>> b) last_tx_done() is checked only for TXDONE_BY_POLL, not for
>>>>>>> TXDONE_BY_ACK, right? Which is how this would work, since txdone_poll
>>>>>>> is false - that is how we want this mechanism to work: once the far
>>>>>>> end is ready for the next message, it acks. I do see your point about
>>>>>>> being tied to the protocol - I don't like it either. In fact, I'd
>>>>>>> prefer that client registration mention what kind of handshaking is
>>>>>>> necessary, but: a) that is not how the mailbox framework is
>>>>>>> constructed at the moment (we state txdone_poll at mailbox
>>>>>>> registration, not at client usage), and b) I have no real need for
>>>>>>> multiple clients or users of the message manager who actually need
>>>>>>> non-ACK usage - even for the foreseeable future (at least one next
>>>>>>> generation of SoC). If such a need does arise in the future, I will
>>>>>>> have to rework the framework and make this a capability chosen at
>>>>>>> client registration time - allowing each client path to use different
>>>>>>> mechanisms on hardware such as this that needs it.
>>>>>>> c) The message manager can actually queue more than one message
>>>>>>> (depending on client capability). Even though, at this point, we are
>>>>>>> not really capable of doing it (again, from what I can see, for the
>>>>>>> immediate future), the mailbox framework, by checking last_tx_done,
>>>>>>> forces single-message sequencing - which does not really exploit the
>>>>>>> capability of the hardware. In theory, we should be able to queue the
>>>>>>> max number of messages, hit cpuidle and snooze away while the remote
>>>>>>> entity chomps away at the data at its own pace and finally gives us a
>>>>>>> notification back - but again, we can argue it is indeed protocol
>>>>>>> dependent, so setting txdone_poll to false actually enables that to be
>>>>>>> done in the user. Again - I have no immediate need for any queued
>>>>>>> multiple-transfer capability yet. Even if I need it in the future, it
>>>>>>> can easily be done by the client while keeping the code as is -
>>>>>>> txdone_poll is false.
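
A side note on (b): the TX-done method is indeed fixed when the controller
is registered, not per client. A minimal sketch of where that decision
lives (hypothetical names for everything except the mailbox API):

#include <linux/device.h>
#include <linux/mailbox_controller.h>

/* Hypothetical registration helper showing where the TX-done mode is set. */
static int example_register(struct device *dev, struct mbox_controller *mbox)
{
        mbox->dev = dev;
        mbox->txdone_irq = false;       /* no TX-done interrupt from the hardware */
        mbox->txdone_poll = false;      /* and no polling of last_tx_done() */
        /*
         * With both flags false, the core treats the channels as
         * TXDONE_BY_ACK and waits for the client to call
         * mbox_client_txdone(). Decided here, per controller.
         */
        return mbox_controller_register(mbox);
}
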
>>>>>>>
>>>>>> All I suggest is that the controller does not queue more than one
>>>>>> message at a time, which means the controller driver allows for the
>>>>>> maximum possible resources taken by a message.
>>>>>> The buffering is already done by the core, and for your 'batch
>>>>>> dispatch' thing the client could simply flush them to the remote by
>>>>>> pretending it got the ack (which is no worse than simply sending all
>>>>>> messages to the remote without caring whether the first succeeded).
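
For what it's worth, that "pretend it got the ack" flush could look
roughly like the sketch below on the client side - illustration only;
everything except the mbox_* client API is made up.

#include <linux/device.h>
#include <linux/err.h>
#include <linux/mailbox_client.h>

/* Hypothetical batch sender: release the core's TX queue without waiting. */
static int batch_send(struct device *dev, void **msgs, int n)
{
        struct mbox_client cl = {
                .dev            = dev,
                .knows_txdone   = true, /* controller left txdone_irq/txdone_poll false */
                .tx_block       = false,
        };
        struct mbox_chan *chan;
        int i, ret = 0;

        chan = mbox_request_channel(&cl, 0);
        if (IS_ERR(chan))
                return PTR_ERR(chan);

        for (i = 0; i < n; i++) {
                ret = mbox_send_message(chan, msgs[i]);
                if (ret < 0)
                        break;
                /* Pretend the ack arrived so the core submits the next message. */
                mbox_client_txdone(chan, 0);
        }

        mbox_free_channel(chan);
        return ret < 0 ? ret : 0;
}
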
>>>>>
>>>>> Are you suggesting to set txdone_poll to true?
>>>> No.
>>>>
>>>>> The controller is quite
>>>>> capable of queueing more than one message at a time. This is the reason
>>>>> for letting the client choose the mode of operation - use the ack
>>>>> mechanism for operation. The client can choose to ignore the buffering
>>>>> in the controller, as you mentioned, but then why force txdone_poll to
>>>>> true and deny the use of the queueing capability of the hardware?
>>>>>
>>>> irq/poll/ack - whatever you use, there is no valid reason to buffer
>>>> messages in the controller driver. Please let me know what use case you
>>>> have in mind that must have messages buffered in the controller driver
>>>> and not the core.
>>>
>>> I am confused; I am _not_ buffering any tx data in the controller
>>> driver - rx data is stored in a temp buffer to send up the stack -
>>> that is just regular practice, right?
>>>
>> right.
>>
>>> In tx, I just check to ensure
>>> that the queue hasn't run out prior to transmission, since the hardware
>>> is capable of queueing - OK, in a single-transmitter system it is
>>> probably a little overkill, but we would like to function in a
>>> multiple-producer SoC as well. What am I missing here?
>>>
>> In send_data() you have ...
>>
>> + /* Are we able to send this or not? */
>> + msg_count = ti_msgmgr_queue_get_num_messages(qinst);
>> + if (msg_count >= desc->max_messages) {
>> + dev_warn(dev, "Queue %s is full (%d messages)\n", qinst->name,
>> + msg_count);
>> + return -EBUSY;
>> + }
>>
>> That is, you check whether there are already some messages in the
>> TX queue. I am not sure how you could hit this, or whether that is legit.
>
> Alright, I will drop this check since it is causing a lot more
> confusion
>
It's confusing because you also check ti_msgmgr_queue_get_num_messages()
in ti_msgmgr_last_tx_done(), which doesn't make sense because that count
also includes messages from other senders (as you say, there could be
multiple senders).
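
Roughly the problem, in code form (illustration only, paraphrasing rather
than quoting your driver):

static bool occupancy_based_last_tx_done(struct ti_queue_inst *qinst)
{
        /*
         * A non-zero count here can just as well be messages queued by
         * *other* transmitters sharing the proxy, so "queue not empty"
         * is not the same as "our last TX not yet consumed".
         */
        return ti_msgmgr_queue_get_num_messages(qinst) == 0;
}
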
> than it is worth. We can introduce it when we eventually do hit an issue
> with multiple processors trying to transmit on the same queue manager.
> That is not a concern in the immediate term, so we should be good to
> drop it.
>
> Please let me know if you are OK with this.
>
I am OK with whatever you assert is needed for your platform; I just
point out what I think are inconsistencies in your assumptions. I'll
pick up the next revision, whatever it is.
Thanks.