Message-ID: <DM4PR12MB5769C60EFD807376CE09DC3FC3A89@DM4PR12MB5769.namprd12.prod.outlook.com>
Date: Fri, 24 Feb 2023 14:16:27 +0000
From: Krishna Yarlagadda <kyarlagadda@...dia.com>
To: Mark Brown <broonie@...nel.org>
CC: "robh+dt@...nel.org" <robh+dt@...nel.org>,
"peterhuewe@....de" <peterhuewe@....de>,
"jgg@...pe.ca" <jgg@...pe.ca>,
"jarkko@...nel.org" <jarkko@...nel.org>,
"krzysztof.kozlowski+dt@...aro.org"
<krzysztof.kozlowski+dt@...aro.org>,
"linux-spi@...r.kernel.org" <linux-spi@...r.kernel.org>,
"linux-tegra@...r.kernel.org" <linux-tegra@...r.kernel.org>,
"linux-integrity@...r.kernel.org" <linux-integrity@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"thierry.reding@...il.com" <thierry.reding@...il.com>,
Jonathan Hunter <jonathanh@...dia.com>,
Sowjanya Komatineni <skomatineni@...dia.com>,
Laxman Dewangan <ldewangan@...dia.com>
Subject: RE: [Patch V3 1/3] tpm_tis-spi: Support hardware wait polling
> -----Original Message-----
> From: Mark Brown <broonie@...nel.org>
> Sent: 24 February 2023 00:13
> To: Krishna Yarlagadda <kyarlagadda@...dia.com>
> Cc: robh+dt@...nel.org; peterhuewe@....de; jgg@...pe.ca;
> jarkko@...nel.org; krzysztof.kozlowski+dt@...aro.org; linux-
> spi@...r.kernel.org; linux-tegra@...r.kernel.org; linux-
> integrity@...r.kernel.org; linux-kernel@...r.kernel.org;
> thierry.reding@...il.com; Jonathan Hunter <jonathanh@...dia.com>;
> Sowjanya Komatineni <skomatineni@...dia.com>; Laxman Dewangan
> <ldewangan@...dia.com>
> Subject: Re: [Patch V3 1/3] tpm_tis-spi: Support hardware wait polling
>
> On Thu, Feb 23, 2023 at 06:41:43PM +0000, Krishna Yarlagadda wrote:
>
> > > > + spi_bus_lock(phy->spi_device->master);
> > > > +
> > > > + while (len) {
>
> > > Why?
>
> > TPM supports a maximum of 64B in a single transaction. Loop to send the rest of it.
>
> No, why is there a bus lock?
The bus lock prevents other clients' transfers from being interleaved between
the TPM transfers of a single transaction.
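Roughly like this (untested sketch, not the exact patch: spi_bus_lock(),
spi_sync_locked() and spi_bus_unlock() are the standard SPI core calls, and
the chunking here is simplified to the 64B limit):

	/* Hold the bus lock for the whole TPM transaction so no other
	 * client's transfers land between the <= 64B chunks. */
	spi_bus_lock(phy->spi_device->master);

	while (len) {
		/* TPM allows at most 64 bytes per transfer */
		unsigned int n = min_t(unsigned int, len, 64);
		struct spi_transfer spi_xfer = {
			.tx_buf = phy->iobuf,
			.len = n,
		};
		struct spi_message m;

		spi_message_init(&m);
		spi_message_add_tail(&spi_xfer, &m);
		/* _locked variant because we already hold the bus lock */
		ret = spi_sync_locked(phy->spi_device, &m);
		if (ret < 0)
			break;
		len -= n;
	}

	spi_bus_unlock(phy->spi_device->master);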
>
> > > > + spi_xfer[0].tx_buf = phy->iobuf;
> > > > + spi_xfer[0].len = 1;
> > > > + spi_message_add_tail(&spi_xfer[0], &m);
> > > > +
> > > > + spi_xfer[1].tx_buf = phy->iobuf + 1;
> > > > + spi_xfer[1].len = 3;
> > > > + spi_message_add_tail(&spi_xfer[1], &m);
>
> > > Why would we make these two separate transfers?
>
> > Tegra QSPI's combined sequence requires cmd, addr and data in different
> > transfers. This eliminates the need for an additional flag for the combined sequence.
>
> That needs some documentation, and we might need a flag to ensure the
> core doesn't coalesce the transfers.
Will add a comment at the top of the function. The bus lock should keep the
transfers of a single message from being coalesced with transfers from other
clients.
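Something like this for the comment plus the split (sketch only; the comment
wording is mine, the hunk itself follows the patch):

	/*
	 * Tegra QSPI's combined sequence mode expects the command,
	 * address and data phases as separate spi_transfers, so keep
	 * the 1-byte command and 3-byte address split rather than
	 * adding a dedicated "combined sequence" flag.
	 */
	spi_xfer[0].tx_buf = phy->iobuf;	/* command byte */
	spi_xfer[0].len = 1;
	spi_message_add_tail(&spi_xfer[0], &m);

	spi_xfer[1].tx_buf = phy->iobuf + 1;	/* 3-byte address */
	spi_xfer[1].len = 3;
	spi_message_add_tail(&spi_xfer[1], &m);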
KY