Message-ID: <9D863536-33C8-4103-A553-64C95FF94FC4@arm.com>
Date:   Wed, 7 Aug 2019 11:11:19 +0000
From:   Tushar Khandelwal <Tushar.Khandelwal@....com>
To:     Sudeep Holla <Sudeep.Holla@....com>,
        Jassi Brar <jassisinghbrar@...il.com>
CC:     Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
        "tushar.2nov@...il.com" <tushar.2nov@...il.com>,
        "morten_bp@...e.dk" <morten_bp@...e.dk>, nd <nd@....com>,
        Morten Petersen <Morten.Petersen@....com>,
        Rob Herring <robh+dt@...nel.org>,
        Mark Rutland <Mark.Rutland@....com>,
        Devicetree List <devicetree@...r.kernel.org>
Subject: Re: [PATCH 1/4] mailbox: arm_mhuv2: add device tree binding
 documentation



On 02/08/2019, 11:54, "Sudeep Holla" <sudeep.holla@....com> wrote:

    On Thu, Jul 25, 2019 at 12:49:58AM -0500, Jassi Brar wrote:
    > On Sun, Jul 21, 2019 at 4:58 PM Jassi Brar <jassisinghbrar@...il.com> wrote:
    > >

    [...]

    > > If the mhuv2 instance implements, say, 3 channel windows between
    > > sender (linux) and receiver (firmware), and Linux runs two protocols
    > > each requiring 1-word and 2-word messages respectively. The hardware
    > > supports that by assigning windows [0] and [1,2] to each protocol.
    > > However, I don't think the driver can support that. Or does it?
    > >
    > Thinking about it, IMO, the mbox-cell should carry a 128-bit (4x32)
    > mask specifying the set of windows (corresponding to the bits set in
    > the mask) associated with the channel.
    > And the controller driver should see any channel as associated with a
    > variable number of windows 'N', where N is in [0,124].
    >
    > mhu_client1: proto1@...00000 {
    >        .....
    >        mboxes = <&mbox 0x0 0x0 0x0 0x1>;
    > }
    >
    > mhu_client2: proto2@...00000 {
    >        .....
    >        mboxes = <&mbox 0x0 0x0 0x0 0x6>;
    > }
    >
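
For illustration only: a plain-C sketch of decoding such a 4x32-bit
specifier into window indices. The cell ordering here is an assumption
(the last cell carries the least significant bits, so <0x0 0x0 0x0 0x1>
selects window 0 and <0x0 0x0 0x0 0x6> selects windows 1 and 2); this
is not the actual driver parsing code.

#include <stdint.h>
#include <stdio.h>

static void decode_window_mask(const uint32_t cells[4])
{
        int word, bit;

        for (word = 0; word < 4; word++) {
                /* assumed ordering: LSBs live in the last cell */
                uint32_t mask = cells[3 - word];

                for (bit = 0; bit < 32; bit++)
                        if (mask & (1u << bit))
                                printf("window %d selected\n",
                                       word * 32 + bit);
        }
}

int main(void)
{
        const uint32_t client1[4] = { 0x0, 0x0, 0x0, 0x1 }; /* window 0 */
        const uint32_t client2[4] = { 0x0, 0x0, 0x0, 0x6 }; /* windows 1, 2 */

        decode_window_mask(client1);
        decode_window_mask(client2);
        return 0;
}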

    This still doesn't address the overhead I mentioned in my arm_mhu_v1
    series.

    As per your suggestion, we will have one channel with a bitmask
    value specifying the set of windows. Let's imagine that 2 protocols
    share the same channel; then the requests are serialised.
    E.g. bits 0 and 1 are allocated for, say, protocol#1 and bits 2 and 3
    for protocol#2.

At a given time, only one protocol can be used by a client; no mixing of
protocols is handled by the driver currently. Also, it's not possible to
address all possible scenarios offered by the IP. That's why the current
driver design is based on the implementations in the existing platforms.

    Further, if protocol#1 has tight latency requirements (like
    sched-governor DVFS) and there are 3-4 pending requests on
    protocol#2, then the incoming requests for protocol#1 are blocked.

    This is definitely an overhead, and I have seen lots of issues
    around this; hence I was requesting that we create individual
    channels for each of these. Having an abstraction on top to
    multiplex or arbitrate won't help.
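
To make the head-of-line blocking concrete, here is a toy model in
plain C; it is purely illustrative and uses no real driver structures:

#include <stdio.h>

int main(void)
{
        /* one shared channel serialises everything: three protocol#2
         * requests are already pending when a protocol#1 (DVFS)
         * request arrives */
        int fifo[] = { 2, 2, 2, 1 };
        int i, t = 0;

        for (i = 0; i < 4; i++) {
                t++;    /* one time unit per transfer */
                printf("t=%d: protocol#%d request done\n", t, fifo[i]);
        }
        /* the latency-sensitive protocol#1 request completes at t=4;
         * on a dedicated channel it would have completed at t=1 */
        return 0;
}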

Also, the (mbox-cells) approach will not allow us to differentiate between
single-word and doorbell modes, which is required to make the controller
driver aware of what data to expect: whether it's a pointer to a location
or the data held in the register itself.
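
To illustrate what the window mask alone cannot express, here is a
hypothetical per-channel descriptor; the names are illustrative and do
not exist in the real driver:

#include <stdint.h>

/* hypothetical: a channel specifier would need a 'mode' field in
 * addition to the window mask */
enum mhu_transport_mode {
        MHU_MODE_DOORBELL,      /* each window bit is an independent signal */
        MHU_MODE_DATA,          /* window registers hold the message words */
};

struct mhu_chan_cfg {
        enum mhu_transport_mode mode;
        uint32_t window_mask[4];        /* windows backing this channel */
};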

--Tushar
    --
    Regards,
    Sudeep


