Message-ID: <enfduqgdgrdpfw73suydlbmu3mopk545vwrpoat2chkop375np@o3fxh5k7wc76>
Date: Wed, 19 Apr 2023 20:11:54 -0700
From: Jerry Snitselaar <jsnitsel@...hat.com>
To: Krishna Yarlagadda <kyarlagadda@...dia.com>
Cc: robh+dt@...nel.org, broonie@...nel.org, peterhuewe@....de,
jgg@...pe.ca, jarkko@...nel.org, krzysztof.kozlowski+dt@...aro.org,
linux-spi@...r.kernel.org, linux-tegra@...r.kernel.org,
linux-integrity@...r.kernel.org, linux-kernel@...r.kernel.org,
thierry.reding@...il.com, jonathanh@...dia.com,
skomatineni@...dia.com, ldewangan@...dia.com
Subject: Re: [Patch V9 2/3] tpm_tis-spi: Add hardware wait polling
On Wed, Apr 19, 2023 at 07:32:40PM -0700, Jerry Snitselaar wrote:
> On Sun, Mar 26, 2023 at 12:04:08AM +0530, Krishna Yarlagadda wrote:
> > TPM devices may insert a wait state on the last clock cycle of the ADDR
> > phase. For SPI controllers that support full-duplex transfers, this can
> > be detected in software by reading the MISO line. For SPI controllers
> > that only support half-duplex transfers, such as the Tegra QSPI, it is
> > not possible to detect the wait signal from software. The QSPI
> > controllers in Tegra234 and Tegra241 implement hardware detection of
> > the wait signal, which can be enabled in the controller for TPM devices.
> >
> > The current TPM TIS driver only supports software detection of the wait
> > signal. To support SPI controllers that use hardware to detect the wait
> > signal, add the function tpm_tis_spi_transfer_half() and move the
> > existing code for software based detection into a function called
> > tpm_tis_spi_transfer_full(). SPI controllers that only support
> > half-duplex transfers will always use tpm_tis_spi_transfer_half()
> > because they cannot support software based detection. The bit
> > SPI_TPM_HW_FLOW is set to indicate to the SPI controller that hardware
> > detection is required, and it is the responsibility of the SPI
> > controller driver to determine whether this is supported.
> >
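As a rough illustration of that split of responsibility (everything below is hypothetical, not taken from patch 3): a controller driver could key off the mode bit in its message path and bail out if it cannot honour it, e.g.

	/* Hypothetical controller callback; the spi->mode check is the point */
	static int foo_spi_transfer_one_message(struct spi_controller *ctlr,
						struct spi_message *msg)
	{
		struct spi_device *spi = msg->spi;

		/* HW wait-state detection requested by the TPM driver */
		if (spi->mode & SPI_TPM_HW_FLOW)
			return -EOPNOTSUPP;	/* this controller cannot honour it */

		/* ... normal message handling would go here ... */
		return 0;
	}
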
> > For hardware flow control, the CMD, ADDR and DATA phases are combined
> > into a single message, whereas for software flow control the existing
> > method of sending CMD-ADDR in one message and DATA in another is kept.
> >
> > Signed-off-by: Krishna Yarlagadda <kyarlagadda@...dia.com>
> > ---
> > drivers/char/tpm/tpm_tis_spi_main.c | 91 ++++++++++++++++++++++++++++-
> > 1 file changed, 89 insertions(+), 2 deletions(-)
> >
> > diff --git a/drivers/char/tpm/tpm_tis_spi_main.c b/drivers/char/tpm/tpm_tis_spi_main.c
> > index a0963a3e92bd..db9afd0b83da 100644
> > --- a/drivers/char/tpm/tpm_tis_spi_main.c
> > +++ b/drivers/char/tpm/tpm_tis_spi_main.c
> > @@ -71,8 +71,74 @@ static int tpm_tis_spi_flow_control(struct tpm_tis_spi_phy *phy,
> > return 0;
> > }
> >
> > -int tpm_tis_spi_transfer(struct tpm_tis_data *data, u32 addr, u16 len,
> > - u8 *in, const u8 *out)
> > +/*
> > + * Half-duplex controllers with support for TPM wait state detection, such
> > + * as the Tegra QSPI, need CMD, ADDR and DATA sent in a single message to
> > + * manage HW flow control. Each phase is sent in a separate transfer so the
> > + * controller can identify the phase.
> > + */
> > +static int tpm_tis_spi_transfer_half(struct tpm_tis_data *data, u32 addr,
> > + u16 len, u8 *in, const u8 *out)
> > +{
> > + struct tpm_tis_spi_phy *phy = to_tpm_tis_spi_phy(data);
> > + struct spi_transfer spi_xfer[3];
> > + struct spi_message m;
> > + u8 transfer_len;
> > + int ret;
> > +
> > + while (len) {
> > + transfer_len = min_t(u16, len, MAX_SPI_FRAMESIZE);
> > +
> > + spi_message_init(&m);
> > + phy->iobuf[0] = (in ? 0x80 : 0) | (transfer_len - 1);
> > + phy->iobuf[1] = 0xd4;
> > + phy->iobuf[2] = addr >> 8;
> > + phy->iobuf[3] = addr;
>
> I haven't looked at much TPM code in the past couple of years, but
> perhaps some defines instead of magic numbers here? 0x80 is the rw bit,
> and 0xd4 the transaction offset?
>
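To illustrate the nit, a minimal sketch of what such defines could look like
(the names are hypothetical, not taken from the TCG spec or the existing
driver):

#define TPM_TIS_SPI_READ	0x80	/* direction bit in the first frame byte */
#define TPM_TIS_SPI_ADDR_MSB	0xd4	/* upper byte of the TIS register space */

	phy->iobuf[0] = (in ? TPM_TIS_SPI_READ : 0) | (transfer_len - 1);
	phy->iobuf[1] = TPM_TIS_SPI_ADDR_MSB;
	phy->iobuf[2] = addr >> 8;
	phy->iobuf[3] = addr;

That would make the read/write bit and the fixed TIS address byte
self-documenting in both the half- and full-duplex paths.
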
> > +
> > + memset(&spi_xfer, 0, sizeof(spi_xfer));
> > +
> > + spi_xfer[0].tx_buf = phy->iobuf;
> > + spi_xfer[0].len = 1;
> > + spi_message_add_tail(&spi_xfer[0], &m);
> > +
> > + spi_xfer[1].tx_buf = phy->iobuf + 1;
> > + spi_xfer[1].len = 3;
> > + spi_message_add_tail(&spi_xfer[1], &m);
> > +
> > + if (out) {
> > + spi_xfer[2].tx_buf = &phy->iobuf[4];
> > + spi_xfer[2].rx_buf = NULL;
> > + memcpy(&phy->iobuf[4], out, transfer_len);
> > + out += transfer_len;
> > + }
> > +
> > + if (in) {
> > + spi_xfer[2].tx_buf = NULL;
> > + spi_xfer[2].rx_buf = &phy->iobuf[4];
> > + }
> > +
> > + spi_xfer[2].len = transfer_len;
> > + spi_message_add_tail(&spi_xfer[2], &m);
> > +
> > + reinit_completion(&phy->ready);
> > +
> > + ret = spi_sync_locked(phy->spi_device, &m);
> > + if (ret < 0)
> > + return ret;
> > +
> > + if (in) {
> > + memcpy(in, &phy->iobuf[4], transfer_len);
> > + in += transfer_len;
> > + }
> > +
> > + len -= transfer_len;
> > + }
> > +
> > + return ret;
> > +}
>
> Does tpm_tis_spi_transfer_half not need to lock the bus? The doc comments for spi_sync_locked
> state:
>
> This call should be used by drivers that require exclusive access to the
> SPI bus. It has to be preceded by a spi_bus_lock call. The SPI bus must
> be released by a spi_bus_unlock call when the exclusive access is over.
>
> If that isn't the case should it be using spi_sync instead of spi_sync_locked?
>
> Regards,
> Jerry
b4 mbox -c to the rescue. I found the earlier discussion with Mark about
the lock, so I guess the question is just should this call spi_sync
instead of spi_sync_locked then?
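For reference, the pairing that the spi_sync_locked() kerneldoc expects looks
roughly like this (just a sketch of the call pattern, reusing the phy/m names
from the patch):

	struct spi_controller *ctlr = phy->spi_device->controller;

	/* spi_sync_locked() assumes the caller already holds the bus lock */
	spi_bus_lock(ctlr);
	ret = spi_sync_locked(phy->spi_device, &m);
	spi_bus_unlock(ctlr);

	/* whereas spi_sync() takes and releases the bus lock internally */
	ret = spi_sync(phy->spi_device, &m);
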
The magic numbers are a minor nit, and can probably be cleaned up
separately since the full-duplex code was already doing the same
thing. The only other nit is the older TCG spec being referenced
in patch 1.
Regards,
Jerry
>
> > +
> > +static int tpm_tis_spi_transfer_full(struct tpm_tis_data *data, u32 addr,
> > + u16 len, u8 *in, const u8 *out)
> > {
> > struct tpm_tis_spi_phy *phy = to_tpm_tis_spi_phy(data);
> > int ret = 0;
> > @@ -140,6 +206,24 @@ int tpm_tis_spi_transfer(struct tpm_tis_data *data, u32 addr, u16 len,
> > return ret;
> > }
> >
> > +int tpm_tis_spi_transfer(struct tpm_tis_data *data, u32 addr, u16 len,
> > + u8 *in, const u8 *out)
> > +{
> > + struct tpm_tis_spi_phy *phy = to_tpm_tis_spi_phy(data);
> > + struct spi_controller *ctlr = phy->spi_device->controller;
> > +
> > + /*
> > +	 * TPM flow control over SPI requires full-duplex support.
> > +	 * Send the entire message to a half-duplex controller so it can
> > +	 * handle wait polling in hardware; the TPM HW flow control flag
> > +	 * is set for such controllers at probe time.
> > + */
> > + if (ctlr->flags & SPI_CONTROLLER_HALF_DUPLEX)
> > + return tpm_tis_spi_transfer_half(data, addr, len, in, out);
> > + else
> > + return tpm_tis_spi_transfer_full(data, addr, len, in, out);
> > +}
> > +
> > static int tpm_tis_spi_read_bytes(struct tpm_tis_data *data, u32 addr,
> > u16 len, u8 *result, enum tpm_tis_io_mode io_mode)
> > {
> > @@ -181,6 +265,9 @@ static int tpm_tis_spi_probe(struct spi_device *dev)
> >
> > phy->flow_control = tpm_tis_spi_flow_control;
> >
> > + if (dev->controller->flags & SPI_CONTROLLER_HALF_DUPLEX)
> > + dev->mode |= SPI_TPM_HW_FLOW;
> > +
> > /* If the SPI device has an IRQ then use that */
> > if (dev->irq > 0)
> > irq = dev->irq;
> > --
> > 2.17.1
> >
>