Message-ID: <CAJe_ZheeEe0xEj733pxr4q=-YzxKHgRM4MMPDJNHa9uRC9Q2zA@mail.gmail.com>
Date:   Wed, 5 Oct 2016 20:13:32 +0530
From:   Jassi Brar <jaswinder.singh@...aro.org>
To:     Horng-Shyang Liao <hs.liao@...iatek.com>
Cc:     CK Hu <ck.hu@...iatek.com>, Daniel Kurtz <djkurtz@...omium.org>,
        Monica Wang <monica.wang@...iatek.com>,
        Jiaguang Zhang <jiaguang.zhang@...iatek.com>,
        Nicolas Boichat <drinkcat@...omium.org>,
        Jassi Brar <jassisinghbrar@...il.com>,
        cawa cheng <cawa.cheng@...iatek.com>,
        Bibby Hsieh <bibby.hsieh@...iatek.com>,
        YT Shen <yt.shen@...iatek.com>,
        Damon Chu <damon.chu@...iatek.com>,
        Devicetree List <devicetree@...r.kernel.org>,
        Sascha Hauer <kernel@...gutronix.de>,
        Daoyuan Huang <daoyuan.huang@...iatek.com>,
        Sascha Hauer <s.hauer@...gutronix.de>,
        Glory Hung <glory.hung@...iatek.com>,
        Rob Herring <robh+dt@...nel.org>,
        linux-mediatek@...ts.infradead.org,
        Matthias Brugger <matthias.bgg@...il.com>,
        "linux-arm-kernel@...ts.infradead.org" 
        <linux-arm-kernel@...ts.infradead.org>,
        srv_heupstream@...iatek.com,
        Josh-YC Liu <josh-yc.liu@...iatek.com>,
        lkml <linux-kernel@...r.kernel.org>,
        Dennis-YC Hsieh <dennis-yc.hsieh@...iatek.com>,
        Philipp Zabel <p.zabel@...gutronix.de>
Subject: Re: [PATCH v14 2/4] CMDQ: Mediatek CMDQ driver

On 5 October 2016 at 18:01, Horng-Shyang Liao <hs.liao@...iatek.com> wrote:
> On Wed, 2016-10-05 at 09:07 +0530, Jassi Brar wrote:
>> On 5 October 2016 at 08:24, Horng-Shyang Liao <hs.liao@...iatek.com> wrote:
>> > On Fri, 2016-09-30 at 17:47 +0800, Horng-Shyang Liao wrote:
>> >> On Fri, 2016-09-30 at 17:11 +0800, CK Hu wrote:
>>
>> >
>> > After tracing the mailbox driver, I realized that the CMDQ driver
>> > cannot use tx_done.
>> >
>> > CMDQ clients will flush many tasks into the CMDQ driver, and the CMDQ
>> > driver will then submit these tasks to the GCE HW "immediately". These
>> > tasks, which are queued in the GCE HW, may not execute immediately,
>> > since they may need to wait for event(s), e.g. vsync.
>> >
>> > However, the mailbox framework uses a software buffer to queue
>> > submitted messages. It only sends the next message after the previous
>> > message is done. This cannot fulfill CMDQ's requirement.
>> >
>> I understand that:
>>  a) the GCE HW can internally queue many tasks in some 'FIFO'
>>  b) execution of some task may have to wait until some external event
>> occurs (like vsync)
>>  c) the GCE does not generate an irq/flag for each task executed (?)
>>
>> If so, maybe your tx_done should return 'true' so long as the GCE HW
>> can accept tasks in its 'FIFO'. For the mailbox API, any task that is
>> queued on the GCE is assumed to be transmitted.
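In mailbox terms, that idea would roughly map onto the controller's
last_tx_done op (used together with txdone_poll). A minimal sketch of the
idea; the register name, bit and helper names below are hypothetical, not
taken from the actual CMDQ/GCE code:

/* Controller side: report "tx done" as long as the GCE FIFO can accept more. */
static bool cmdq_last_tx_done(struct mbox_chan *chan)
{
        struct cmdq_thread *thread = chan->con_priv;

        /* Hypothetical status register/bit; the real GCE layout may differ. */
        return !(readl(thread->base + GCE_THR_STATUS) & GCE_THR_FIFO_FULL);
}

static const struct mbox_chan_ops cmdq_mbox_chan_ops = {
        .send_data    = cmdq_mbox_send_data,
        .startup      = cmdq_mbox_startup,
        .shutdown     = cmdq_mbox_shutdown,
        .last_tx_done = cmdq_last_tx_done,   /* polled when txdone_poll is set */
};

With something like this, the framework would only hold back new
submissions while the GCE FIFO is actually full, rather than after every
single message.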
>>
>> > Here is some code quoted from the mailbox driver. Please note the
>> > "active_req" part.
>> >
>> > static void msg_submit(struct mbox_chan *chan)
>> > {
>> >         ...
>> >         /* do not submit a new message while one is still in flight */
>> >         if (!chan->msg_count || chan->active_req)
>> >                 goto exit;
>> >         ...
>> >         err = chan->mbox->ops->send_data(chan, data);
>> >         if (!err) {
>> >                 chan->active_req = data;
>> >                 chan->msg_count--;
>> >         }
>> >         ...
>> > }
>> >
>> > static void tx_tick(struct mbox_chan *chan, int r)
>> > {
>> >         ...
>> >         spin_lock_irqsave(&chan->lock, flags);
>> >         mssg = chan->active_req;
>> >         chan->active_req = NULL;
>> >         spin_unlock_irqrestore(&chan->lock, flags);
>> >         ...
>> > }
>> >
>> > The current working CMDQ driver uses mbox_client_txdone() to avoid this
>> > issue, and then uses its own callback functions to handle completed tasks.
>> >
>> > int cmdq_task_flush_async(struct cmdq_client *client, struct cmdq_task
>> > *task, cmdq_async_flush_cb cb, void *data)
>> > {
>> >         ...
>> >         mbox_send_message(client->chan, task);
>> >         /* We can send next task immediately, so just call txdone. */
>> >         mbox_client_txdone(client->chan, 0);
>> >         ...
>> > }
>> >
>> > Another solution is to use rx_callback; i.e. the CMDQ mailbox controller
>> > calls mbox_chan_received_data() when a CMDQ task is done. But this may
>> > violate the design of the mailbox framework. What do you think?
>> >
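For what it's worth, a minimal sketch of that rx_callback route, seen from
the controller's irq handler; the struct and register names below
(cmdq_thread, task_busy_list, CMDQ_THR_CURR_ADDR, ...) are placeholders
for illustration, not necessarily what the real driver uses:

static void cmdq_thread_irq_handler(struct cmdq *cmdq,
                                    struct cmdq_thread *thread)
{
        dma_addr_t curr_pa = readl(thread->base + CMDQ_THR_CURR_ADDR);
        struct cmdq_task *task, *tmp;

        /* Every task whose command buffer the GCE pc has moved past is done. */
        list_for_each_entry_safe(task, tmp, &thread->task_busy_list, list_entry) {
                if (curr_pa < task->pa_base + task->cmd_buf_size)
                        break;
                list_del(&task->list_entry);
                /* Report the finished task back to the client. */
                mbox_chan_received_data(thread->chan, task);
        }
}

mbox_chan_received_data() attaches no meaning to the payload, so
mechanically this works; whether a completion really counts as "received
data" is exactly the design question raised above.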
>> If my point (c) above does not hold, maybe look at implementing the
>> tx_done() callback and submitting the next task from the callback of the
>> last completed one.
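On the client side that suggestion would look roughly like the sketch
below, assuming a hypothetical cmdq_client that keeps its own list of
pending tasks (all names illustrative):

static void cmdq_client_tx_done(struct mbox_client *cl, void *mssg, int r)
{
        struct cmdq_client *client = container_of(cl, struct cmdq_client, client);
        struct cmdq_task *next;
        unsigned long flags;

        /* The previous task is out of the framework; push the next one, if any. */
        spin_lock_irqsave(&client->lock, flags);
        next = list_first_entry_or_null(&client->pending,
                                        struct cmdq_task, list_entry);
        if (next)
                list_del(&next->list_entry);
        spin_unlock_irqrestore(&client->lock, flags);

        if (next)
                mbox_send_message(client->chan, next);
}

This keeps at most one task in flight per channel, which is precisely the
limitation discussed below.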
>
>
> Hi Jassi,
>
> For point (c), a GCE irq means 1~n tasks are done, or
> 0~n tasks are done plus 1 task error.
> In the irq handler, we can tell which tasks are done from a register and
> the GCE pc.
>
> As I mentioned before, we cannot wait for the previous task's tx_done
> before submitting the next one. We need to submit multiple tasks to the
> GCE HW immediately and queue them in the GCE HW.

> Let me explain this requirement with a mouse cursor example. The user may
> move the mouse quickly between two vsyncs, so DRM may update display
> registers frequently. For CMDQ, that means many tasks are flushed into the
> CMDQ driver, and the CMDQ driver needs to process all of them in the next
> vblank. Therefore, we cannot hold back any CMDQ task in a SW buffer.
>
>
We are interested only in the current position of the cursor and not its
trail. Also, the current position should be updated at the next vsync (and
not the one after it).
Going by this example, if the GCE HW can take in 'N' tasks at a time,
then the (N+1)th submission should shift out (drop) the 1st task queued,
so that at any time the GCE HW holds only the latest N tasks. Right?

 If yes, maybe you don't need to care about tx-done and can simply keep
shoving tasks in as you generate them.

 If no, maybe your client driver needs to emulate such a circular buffer,
where the oldest task is overwritten by the newest submission, and you
submit the circular buffer (the most relevant tasks) to the GCE HW in
one go.
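
A rough sketch of such a client-side ring, with hypothetical names
(cmdq_ring, cmdq_ring_push, etc. are illustrative, not an existing API):

#define CMDQ_RING_SIZE  8       /* assume the GCE can hold N = 8 tasks */

struct cmdq_ring {
        struct cmdq_task *slots[CMDQ_RING_SIZE];
        unsigned int head;      /* next slot to overwrite */
        unsigned int count;     /* how many valid tasks are held */
};

/* The newest submission overwrites the oldest task once the ring is full. */
static void cmdq_ring_push(struct cmdq_ring *ring, struct cmdq_task *task)
{
        ring->slots[ring->head] = task;
        ring->head = (ring->head + 1) % CMDQ_RING_SIZE;
        if (ring->count < CMDQ_RING_SIZE)
                ring->count++;
}

/* Hand the whole ring (the most relevant tasks) to the GCE in one go. */
static void cmdq_ring_flush(struct cmdq_ring *ring, struct mbox_chan *chan)
{
        unsigned int i, idx;

        /* The oldest entry sits at head when the ring is full, else at slot 0. */
        idx = (ring->count == CMDQ_RING_SIZE) ? ring->head : 0;
        for (i = 0; i < ring->count; i++) {
                mbox_send_message(chan, ring->slots[idx]);
                idx = (idx + 1) % CMDQ_RING_SIZE;
        }
        ring->count = 0;
        ring->head = 0;
}

cmdq_ring_flush() would then run once per vblank (or whenever the GCE is
ready to take a batch), so the HW only ever sees the latest N tasks.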
