Open Source and information security mailing list archives
Date:   Fri, 30 Aug 2019 08:28:01 +0000
From:   Peng Fan <peng.fan@....com>
To:     Jassi Brar <jassisinghbrar@...il.com>
CC:     "robh+dt@...nel.org" <robh+dt@...nel.org>,
        "mark.rutland@....com" <mark.rutland@....com>,
        "sudeep.holla@....com" <sudeep.holla@....com>,
        "andre.przywara@....com" <andre.przywara@....com>,
        "f.fainelli@...il.com" <f.fainelli@...il.com>,
        "devicetree@...r.kernel.org" <devicetree@...r.kernel.org>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        "linux-arm-kernel@...ts.infradead.org" 
        <linux-arm-kernel@...ts.infradead.org>,
        dl-linux-imx <linux-imx@....com>
Subject: RE: [PATCH v5 1/2] dt-bindings: mailbox: add binding doc for the ARM
 SMC/HVC mailbox


> Subject: Re: [PATCH v5 1/2] dt-bindings: mailbox: add binding doc for the ARM
> SMC/HVC mailbox
> 
> On Fri, Aug 30, 2019 at 3:07 AM Peng Fan <peng.fan@....com> wrote:
> >
> > > Subject: Re: [PATCH v5 1/2] dt-bindings: mailbox: add binding doc
> > > for the ARM SMC/HVC mailbox
> > >
> > > On Fri, Aug 30, 2019 at 2:37 AM Peng Fan <peng.fan@....com> wrote:
> > > >
> > > > Hi Jassi,
> > > >
> > > > > Subject: Re: [PATCH v5 1/2] dt-bindings: mailbox: add binding
> > > > > doc for the ARM SMC/HVC mailbox
> > > > >
> > > > > On Fri, Aug 30, 2019 at 1:28 AM Peng Fan <peng.fan@....com> wrote:
> > > > >
> > > > > > > > +examples:
> > > > > > > > +  - |
> > > > > > > > +    sram@...000 {
> > > > > > > > +      compatible = "mmio-sram";
> > > > > > > > +      reg = <0x0 0x93f000 0x0 0x1000>;
> > > > > > > > +      #address-cells = <1>;
> > > > > > > > +      #size-cells = <1>;
> > > > > > > > +      ranges = <0 0x0 0x93f000 0x1000>;
> > > > > > > > +
> > > > > > > > +      cpu_scp_lpri: scp-shmem@0 {
> > > > > > > > +        compatible = "arm,scmi-shmem";
> > > > > > > > +        reg = <0x0 0x200>;
> > > > > > > > +      };
> > > > > > > > +
> > > > > > > > +      cpu_scp_hpri: scp-shmem@200 {
> > > > > > > > +        compatible = "arm,scmi-shmem";
> > > > > > > > +        reg = <0x200 0x200>;
> > > > > > > > +      };
> > > > > > > > +    };
> > > > > > > > +
> > > > > > > > +    firmware {
> > > > > > > > +      smc_mbox: mailbox {
> > > > > > > > +        #mbox-cells = <1>;
> > > > > > > > +        compatible = "arm,smc-mbox";
> > > > > > > > +        method = "smc";
> > > > > > > > +        arm,num-chans = <0x2>;
> > > > > > > > +        transports = "mem";
> > > > > > > > +        /* Optional */
> > > > > > > > +        arm,func-ids = <0xc20000fe>, <0xc20000ff>;
> > > > > > > >
> > > > > > > SMC/HVC is synchronously(block) running in "secure mode",
> > > > > > > i.e, there can only be one instance running platform wide. Right?
> > > > > >
> > > > > > I think there could be channel for TEE, and channel for Linux.
> > > > > > For virtualization case, there could be dedicated channel for each
> VM.
> > > > > >
> > > > > I am talking from Linux pov. Functions 0xfe and 0xff above,
> > > > > can't both be active at the same time, right?
> > > >
> > > > If I get your point correctly,
> > > > On UP, both could not be active. On SMP, tx/rx could be both
> > > > active, anyway this depends on secure firmware and Linux firmware
> design.
> > > >
> > > > Do you have any suggestions about arm,func-ids here?
> > > >
> > > I was thinking if this is just an instruction, why can't each
> > > channel be represented as a controller, i.e, have exactly one func-id per
> controller node.
> > > Define as many controllers as you need channels ?
> >
> > I am ok, this could make driver code simpler. Something as below?
> >
> >     smc_tx_mbox: tx_mbox {
> >       #mbox-cells = <0>;
> >       compatible = "arm,smc-mbox";
> >       method = "smc";
> >       transports = "mem";
> >       arm,func-id = <0xc20000fe>;
> >     };
> >
> >     smc_rx_mbox: rx_mbox {
> >       #mbox-cells = <0>;
> >       compatible = "arm,smc-mbox";
> >       method = "smc";
> >       transports = "mem";
> >       arm,func-id = <0xc20000ff>;
> >     };
> >
> >     firmware {
> >       scmi {
> >         compatible = "arm,scmi";
> >         mboxes = <&smc_tx_mbox>, <&smc_rx_mbox>;
> >         mbox-names = "tx", "rx";
> >         shmem = <&cpu_scp_lpri>, <&cpu_scp_hpri>;
> >       };
> >     };
> >
> Yes, the channel part is good.
> But I am not convinced by the need to have SCMI specific "transport" mode.

The SCMI spec only supports shared-memory messages. However, to make this
driver generic, it also needs to handle messages passed in ARM registers.

If using a shared-memory message, the call will be
invoke_smc_mbox_fn(function_id, chan_id, 0, 0, 0, 0, 0, 0);
If using ARM registers to transfer the message, the call will be
invoke_smc_mbox_fn(cmd->a0, cmd->a1, cmd->a2, cmd->a3,
cmd->a4, cmd->a5, cmd->a6, cmd->a7);

So I added the "transports" mode.

Code as below:
        if (chan_data->flags & ARM_SMC_MBOX_MEM_TRANS) {
                if (chan_data->function_id != UINT_MAX)
                        function_id = chan_data->function_id;
                else
                        function_id = cmd->a0;
                chan_id = chan_data->chan_id;
                ret = invoke_smc_mbox_fn(function_id, chan_id, 0, 0, 0, 0,
                                         0, 0);
        } else {
                ret = invoke_smc_mbox_fn(cmd->a0, cmd->a1, cmd->a2, cmd->a3,
                                         cmd->a4, cmd->a5, cmd->a6, cmd->a7);
        }


Per Sudeep's comments on the previous version, it is better to pass chan_id
to the secure firmware.
If I drop the "transports" mode, I do not have a good idea how to differentiate
the two cases, reg and mem. Any suggestions?

Thanks,
Peng.


> 
> thanks
