Message-ID: <DM4PR12MB5769BCBCD410C75DF3EB26FBC3AC9@DM4PR12MB5769.namprd12.prod.outlook.com>
Date:   Tue, 28 Feb 2023 03:32:24 +0000
From:   Krishna Yarlagadda <kyarlagadda@...dia.com>
To:     Jarkko Sakkinen <jarkko@...nel.org>
CC:     "robh+dt@...nel.org" <robh+dt@...nel.org>,
        "broonie@...nel.org" <broonie@...nel.org>,
        "peterhuewe@....de" <peterhuewe@....de>,
        "jgg@...pe.ca" <jgg@...pe.ca>,
        "krzysztof.kozlowski+dt@...aro.org" 
        <krzysztof.kozlowski+dt@...aro.org>,
        "linux-spi@...r.kernel.org" <linux-spi@...r.kernel.org>,
        "linux-tegra@...r.kernel.org" <linux-tegra@...r.kernel.org>,
        "linux-integrity@...r.kernel.org" <linux-integrity@...r.kernel.org>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        "thierry.reding@...il.com" <thierry.reding@...il.com>,
        Jonathan Hunter <jonathanh@...dia.com>,
        Sowjanya Komatineni <skomatineni@...dia.com>,
        Laxman Dewangan <ldewangan@...dia.com>
Subject: RE: [Patch V5 2/3] tpm_tis-spi: Support hardware wait polling

> -----Original Message-----
> From: Jarkko Sakkinen <jarkko@...nel.org>
> Sent: 28 February 2023 08:06
> To: Krishna Yarlagadda <kyarlagadda@...dia.com>
> Cc: robh+dt@...nel.org; broonie@...nel.org; peterhuewe@....de;
> jgg@...pe.ca; krzysztof.kozlowski+dt@...aro.org; linux-spi@...r.kernel.org;
> linux-tegra@...r.kernel.org; linux-integrity@...r.kernel.org; linux-
> kernel@...r.kernel.org; thierry.reding@...il.com; Jonathan Hunter
> <jonathanh@...dia.com>; Sowjanya Komatineni
> <skomatineni@...dia.com>; Laxman Dewangan <ldewangan@...dia.com>
> Subject: Re: [Patch V5 2/3] tpm_tis-spi: Support hardware wait polling
> 
> External email: Use caution opening links or attachments
> 
> 
> On Mon, Feb 27, 2023 at 05:37:01PM +0530, Krishna Yarlagadda wrote:
> > TPM devices raise a wait signal on the last address cycle. A software
> > driver can detect it by reading the MISO line on the same clock, which
> > requires full-duplex support. On half-duplex controllers, wait detection
> > has to be implemented in HW.
> > Support hardware wait-state detection by sending the entire message and
> > letting the controller handle flow control.
> 
> When a sentence is started with the word "support", it translates to "I'm
> too lazy to write a proper and verbose description of the implementation"
> :-)
> 
> It has some abstract ideas of the implementation, I'll give you that, but do
> you honestly think anyone will ever get any value out of reading it? A bit
> more concrete description of the change helps e.g. when bisecting bugs.
> 
I described why we are making the change and will add an explanation of how
it is implemented as well. For contrast, a sketch of the existing full-duplex
flow control follows below.
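
[Editor's note: a rough sketch of the full-duplex (software) wait handling
that the existing driver already performs and that the new
tpm_tis_spi_sw_flow_transfer() path keeps: after clocking out the 4-byte
header, the driver keeps reading single bytes on MISO until the TPM reports
ready. This is paraphrased from the current tpm_tis_spi_flow_control();
details such as TPM_RETRY and the exact byte indices are assumptions to be
checked against the tree.]

/*
 * Sketch of the existing full-duplex flow control (paraphrased from
 * tpm_tis_spi_flow_control() in tpm_tis_spi_main.c). Bit 0 of the last
 * header byte clocked in on MISO tells us whether the TPM inserted wait
 * states; if so, keep clocking single bytes until the TPM signals ready.
 * TPM_RETRY is the driver's existing retry bound.
 */
static int tpm_tis_spi_flow_control(struct tpm_tis_spi_phy *phy,
				    struct spi_transfer *spi_xfer)
{
	struct spi_message m;
	int ret, i;

	if ((phy->iobuf[3] & 0x01) == 0) {
		/* TPM signalled a wait state on the last address byte. */
		for (i = 0; i < TPM_RETRY; i++) {
			spi_xfer->len = 1;
			spi_message_init(&m);
			spi_message_add_tail(spi_xfer, &m);
			ret = spi_sync_locked(phy->spi_device, &m);
			if (ret < 0)
				return ret;
			if (phy->iobuf[0] & 0x01)
				break;	/* TPM ready, resume the data phase */
		}
		if (i == TPM_RETRY)
			return -ETIMEDOUT;
	}

	return 0;
}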

> > QSPI controllers in Tegra236 & Tegra241 implement TPM wait polling.
> >
> > Signed-off-by: Krishna Yarlagadda <kyarlagadda@...dia.com>
> > ---
> >  drivers/char/tpm/tpm_tis_spi_main.c | 92 ++++++++++++++++++++++++++++-
> >  1 file changed, 90 insertions(+), 2 deletions(-)
> >
> > diff --git a/drivers/char/tpm/tpm_tis_spi_main.c b/drivers/char/tpm/tpm_tis_spi_main.c
> > index a0963a3e92bd..5f66448ee09e 100644
> > --- a/drivers/char/tpm/tpm_tis_spi_main.c
> > +++ b/drivers/char/tpm/tpm_tis_spi_main.c
> > @@ -71,8 +71,74 @@ static int tpm_tis_spi_flow_control(struct tpm_tis_spi_phy *phy,
> >       return 0;
> >  }
> >
> > -int tpm_tis_spi_transfer(struct tpm_tis_data *data, u32 addr, u16 len,
> > -                      u8 *in, const u8 *out)
> > +/*
> > + * Half-duplex controllers with support for TPM wait-state detection, like
> > + * Tegra241, need cmd, addr & data sent in a single message to manage HW
> > + * flow control. Each phase is sent in a different transfer for the
> > + * controller to identify the phase.
> > + */
> > +int tpm_tis_spi_hw_flow_transfer(struct tpm_tis_data *data, u32 addr, u16 len,
> > +                              u8 *in, const u8 *out)
> > +{
> > +     struct tpm_tis_spi_phy *phy = to_tpm_tis_spi_phy(data);
> > +     struct spi_transfer spi_xfer[3];
> > +     struct spi_message m;
> > +     u8 transfer_len;
> > +     int ret;
> > +
> > +     while (len) {
> > +             transfer_len = min_t(u16, len, MAX_SPI_FRAMESIZE);
> > +
> > +             spi_message_init(&m);
> > +             phy->iobuf[0] = (in ? 0x80 : 0) | (transfer_len - 1);
> > +             phy->iobuf[1] = 0xd4;
> > +             phy->iobuf[2] = addr >> 8;
> > +             phy->iobuf[3] = addr;
> > +
> > +             memset(&spi_xfer, 0, sizeof(spi_xfer));
> > +
> > +             spi_xfer[0].tx_buf = phy->iobuf;
> > +             spi_xfer[0].len = 1;
> > +             spi_message_add_tail(&spi_xfer[0], &m);
> > +
> > +             spi_xfer[1].tx_buf = phy->iobuf + 1;
> > +             spi_xfer[1].len = 3;
> > +             spi_message_add_tail(&spi_xfer[1], &m);
> > +
> > +             if (out) {
> > +                     spi_xfer[2].tx_buf = &phy->iobuf[4];
> > +                     spi_xfer[2].rx_buf = NULL;
> > +                     memcpy(&phy->iobuf[4], out, transfer_len);
> > +                     out += transfer_len;
> > +             }
> > +
> > +             if (in) {
> > +                     spi_xfer[2].tx_buf = NULL;
> > +                     spi_xfer[2].rx_buf = &phy->iobuf[4];
> > +             }
> > +
> > +             spi_xfer[2].len = transfer_len;
> > +             spi_message_add_tail(&spi_xfer[2], &m);
> > +
> > +             reinit_completion(&phy->ready);
> > +
> > +             ret = spi_sync_locked(phy->spi_device, &m);
> > +             if (ret < 0)
> > +                     return ret;
> > +
> > +             if (in) {
> > +                     memcpy(in, &phy->iobuf[4], transfer_len);
> > +                     in += transfer_len;
> > +             }
> > +
> > +             len -= transfer_len;
> > +     }
> > +
> > +     return ret;
> > +}
> > +
> > +int tpm_tis_spi_sw_flow_transfer(struct tpm_tis_data *data, u32 addr, u16 len,
> > +                              u8 *in, const u8 *out)
> >  {
> >       struct tpm_tis_spi_phy *phy = to_tpm_tis_spi_phy(data);
> >       int ret = 0;
> > @@ -140,6 +206,28 @@ int tpm_tis_spi_transfer(struct tpm_tis_data *data, u32 addr, u16 len,
> >       return ret;
> >  }
> >
> > +int tpm_tis_spi_transfer(struct tpm_tis_data *data, u32 addr, u16 len,
> > +                      u8 *in, const u8 *out)
> > +{
> > +     struct tpm_tis_spi_phy *phy = to_tpm_tis_spi_phy(data);
> > +     struct spi_controller *ctlr = phy->spi_device->controller;
> > +
> > +     /*
> > +      * TPM flow control over SPI requires full-duplex support.
> > +      * Send the entire message to a half-duplex controller so that it
> > +      * can handle wait polling itself.
> > +      * Set the TPM HW flow control flag.
> > +      */
> > +     if (ctlr->flags & SPI_CONTROLLER_HALF_DUPLEX) {
> > +             phy->spi_device->mode |= SPI_TPM_HW_FLOW;
> > +             return tpm_tis_spi_hw_flow_transfer(data, addr, len, in,
> > +                                                 out);
> > +     } else {
> > +             return tpm_tis_spi_sw_flow_transfer(data, addr, len, in,
> > +                                                 out);
> > +     }
> > +}
> > +
> >  static int tpm_tis_spi_read_bytes(struct tpm_tis_data *data, u32 addr,
> >                                 u16 len, u8 *result, enum tpm_tis_io_mode io_mode)
> >  {
> > --
> > 2.17.1
> >
> 
> Looking pretty good but do you really want to export
> tpm_tis_spi_{hw,sw}_flow_transfer?
> 
> BR, Jarkko
There is no need to export tpm_tis_spi_{hw,sw}_flow_transfer either.
I will update this in the next version; a sketch of the file-local helpers
follows below.
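
[Editor's note: a minimal sketch of what dropping the exports could look like
in the next revision, assuming both helpers stay in tpm_tis_spi_main.c and
only the existing tpm_tis_spi_transfer() entry point is used outside this
file. Bodies are unchanged from the patch above.]

/* Flow-specific helpers become file-local; only the dispatcher is shared. */
static int tpm_tis_spi_hw_flow_transfer(struct tpm_tis_data *data, u32 addr,
					u16 len, u8 *in, const u8 *out)
{
	/* ... body exactly as in the patch above ... */
	return 0;
}

static int tpm_tis_spi_sw_flow_transfer(struct tpm_tis_data *data, u32 addr,
					u16 len, u8 *in, const u8 *out)
{
	/* ... body exactly as in the patch above ... */
	return 0;
}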

KY
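
[Editor's note: for background on how the SPI_TPM_HW_FLOW mode bit set in
tpm_tis_spi_transfer() is meant to be consumed, here is a purely hypothetical
sketch of a half-duplex controller driver enabling a hardware wait-state
engine for TPM transfers. The register name, bit, and structure below are
invented for illustration and are not taken from the Tegra QSPI patch in this
series.]

/* Illustration only: all names and the register layout below are invented. */
#define EXAMPLE_QSPI_MISC_REG		0x194	/* hypothetical control register */
#define EXAMPLE_QSPI_TPM_WAIT_POLL_EN	BIT(0)	/* hypothetical "stall SCK on TPM wait" bit */

struct example_qspi {
	void __iomem *base;
};

/*
 * Called per transfer by a hypothetical half-duplex controller driver.
 * The TPM client driver sets SPI_TPM_HW_FLOW on the spi_device, so the
 * controller can recognise TPM transfers and let its hardware wait-state
 * engine stall the clock until the TPM de-asserts the wait signal.
 */
static void example_qspi_config_tpm_wait(struct example_qspi *eqspi,
					 struct spi_device *spi)
{
	u32 val = readl(eqspi->base + EXAMPLE_QSPI_MISC_REG);

	if (spi->mode & SPI_TPM_HW_FLOW)
		val |= EXAMPLE_QSPI_TPM_WAIT_POLL_EN;
	else
		val &= ~EXAMPLE_QSPI_TPM_WAIT_POLL_EN;

	writel(val, eqspi->base + EXAMPLE_QSPI_MISC_REG);
}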
