Message-ID: <ca099c0833dc79f0a88edecd9fb949157eacbf46.camel@linux.intel.com>
Date:   Mon, 07 Dec 2020 18:42:07 +0000
From:   Daniele Alessandrelli <daniele.alessandrelli@...ux.intel.com>
To:     Rob Herring <robh@...nel.org>, mgross@...ux.intel.com,
        daniele.alessandrelli@...el.com
Cc:     markgross@...nel.org, arnd@...db.de, bp@...e.de,
        damien.lemoal@....com, gregkh@...uxfoundation.org, corbet@....net,
        leonard.crestez@....com, palmerdabbelt@...gle.com,
        paul.walmsley@...ive.com, peng.fan@....com, shawnguo@...nel.org,
        linux-kernel@...r.kernel.org, devicetree@...r.kernel.org,
        Jassi Brar <jassisinghbrar@...il.com>
Subject: Re: [PATCH 02/22] dt-bindings: Add bindings for Keem Bay IPC driver

Hi Rob,

Thanks for the feedback.

On Mon, 2020-12-07 at 10:01 -0600, Rob Herring wrote:
> On Tue, Dec 01, 2020 at 02:34:51PM -0800, mgross@...ux.intel.com wrote:
> > From: Daniele Alessandrelli <daniele.alessandrelli@...el.com>
> > 
> > Add DT binding documentation for the Intel Keem Bay IPC driver, which
> > enables communication between the Computing Sub-System (CSS) and the
> > Multimedia Sub-System (MSS) of the Intel Movidius SoC code named Keem
> > Bay.
> > 

[cut]

> > +
> > +description:
> > +  The Keem Bay IPC driver enables Inter-Processor Communication (IPC) with the
> > +  Visual Processor Unit (VPU) embedded in the Intel Movidius SoC code named
> > +  Keem Bay.
> 
> Sounds like a mailbox.

We did consider using the mailbox framework, but eventually decided
against it, mainly for the following two reasons:

1. The channel concept in the Mailbox framework is different from the
   channel concept in Keem Bay IPC:

   a. My understanding is that Mailbox channels are meant to be SW
      representations of physical HW channels, while Keem Bay IPC
      channels are software abstractions used to multiplex
      communication over a single HW link.

   b. Additionally, Keem Bay IPC has two different classes of channels 
      (high-speed channels and general-purpose channels) that need to
      access the same HW link with different priorities.

2. The blocking / non-blocking TX behavior of a mailbox channel is
   defined at channel creation time (by the tx_block value of the
   mailbox client passed to mbox_request_channel(); my understanding
   is that tx_block cannot be changed after the channel is created),
   while in Keem Bay IPC the same channel can be used for both
   blocking and non-blocking TX, with the behavior controlled by the
   timeout argument passed to keembay_ipc_send(). The sketch below
   tries to make this contrast concrete.
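
Here is a rough, hypothetical sketch of the mailbox-client side, just
to make the contrast concrete (names and values are made up, not taken
from the series):

#include <linux/mailbox_client.h>

static struct mbox_client kmb_cl;

static struct mbox_chan *kmb_vpu_chan_get(struct device *dev)
{
        kmb_cl.dev      = dev;
        kmb_cl.tx_block = true;  /* blocking behavior fixed here...      */
        kmb_cl.tx_tout  = 500;   /* ...together with the TX timeout (ms) */

        return mbox_request_channel(&kmb_cl, 0);
}

static int kmb_vpu_send(struct mbox_chan *chan, void *msg)
{
        /*
         * Every send on this channel inherits tx_block from above,
         * whereas keembay_ipc_send() lets the caller pick blocking vs
         * non-blocking per call via its timeout argument.
         */
        return mbox_send_message(chan, msg);
}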

Having said that, I guess it would be possible to create a Mailbox
driver implementing the core communication mechanism used by Keem Bay
IPC and then build our API on top of it (basically having two drivers;
see the rough sketch below). But I'm not sure that would make the code
simpler or easier to maintain. Any thoughts on this?
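
For illustration, a hypothetical, stripped-down mailbox controller
modelling only the physical CSS<->VPU link could look roughly like
this (nothing below is taken from the series):

#include <linux/mailbox_controller.h>
#include <linux/platform_device.h>

static int kmb_link_send_data(struct mbox_chan *chan, void *data)
{
        /* Write 'data' to the HW FIFO / ring the doorbell. */
        return 0;
}

static const struct mbox_chan_ops kmb_link_ops = {
        .send_data = kmb_link_send_data,
};

static int kmb_link_probe(struct platform_device *pdev)
{
        struct mbox_controller *mbox;
        struct mbox_chan *chans;

        mbox  = devm_kzalloc(&pdev->dev, sizeof(*mbox), GFP_KERNEL);
        chans = devm_kzalloc(&pdev->dev, sizeof(*chans), GFP_KERNEL);
        if (!mbox || !chans)
                return -ENOMEM;

        mbox->dev        = &pdev->dev;
        mbox->ops        = &kmb_link_ops;
        mbox->chans      = chans;
        mbox->num_chans  = 1;      /* one channel per physical HW link */
        mbox->txdone_irq = true;   /* TX completion signalled by IRQ   */

        return devm_mbox_controller_register(&pdev->dev, mbox);
}

The Keem Bay IPC layer would then be a regular mailbox client of such a
controller and keep the channel multiplexing and priority handling for
itself.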


>  
> 
> What's the relationship between this and the xlink thing?
> 

xLink internally uses Keem Bay IPC to communicate with the VPU.
Basically, Keem Bay IPC is the lowest layer of the xLink stack.



