Date:   Fri, 21 Apr 2023 10:39:06 +0100
From:   Jon Hunter <jonathanh@...dia.com>
To:     Krishna Yarlagadda <kyarlagadda@...dia.com>, jsnitsel@...hat.com,
        robh+dt@...nel.org, broonie@...nel.org, peterhuewe@....de,
        jgg@...pe.ca, jarkko@...nel.org, krzysztof.kozlowski+dt@...aro.org,
        linux-spi@...r.kernel.org, linux-tegra@...r.kernel.org,
        linux-integrity@...r.kernel.org, linux-kernel@...r.kernel.org
Cc:     thierry.reding@...il.com, skomatineni@...dia.com,
        ldewangan@...dia.com
Subject: Re: [Patch V10 3/3] spi: tegra210-quad: Enable TPM wait polling


On 21/04/2023 10:13, Krishna Yarlagadda wrote:
> Trusted Platform Module requires flow control. As defined in the TPM
> interface specification, the client drives the MISO line in the same
> cycle as the last address bit on MOSI.
> Tegra234 and Tegra241 QSPI controllers have a TPM wait-state detection
> feature, which is enabled for TPM client devices that report it in the
> SPI device mode bits.
> 
> Signed-off-by: Krishna Yarlagadda <kyarlagadda@...dia.com>
> ---
>   drivers/spi/spi-tegra210-quad.c | 14 ++++++++++++++
>   1 file changed, 14 insertions(+)
> 
> diff --git a/drivers/spi/spi-tegra210-quad.c b/drivers/spi/spi-tegra210-quad.c
> index bea376acea1f..fbd14dd7be44 100644
> --- a/drivers/spi/spi-tegra210-quad.c
> +++ b/drivers/spi/spi-tegra210-quad.c
> @@ -142,6 +142,7 @@
>   
>   #define QSPI_GLOBAL_CONFIG			0X1a4
>   #define QSPI_CMB_SEQ_EN				BIT(0)
> +#define QSPI_TPM_WAIT_POLL_EN			BIT(1)
>   
>   #define QSPI_CMB_SEQ_ADDR			0x1a8
>   #define QSPI_ADDRESS_VALUE_SET(X)		(((x) & 0xFFFF) << 0)
> @@ -164,6 +165,7 @@
>   struct tegra_qspi_soc_data {
>   	bool has_dma;
>   	bool cmb_xfer_capable;
> +	bool supports_tpm;
>   	unsigned int cs_count;
>   };
>   
> @@ -1065,6 +1067,12 @@ static int tegra_qspi_combined_seq_xfer(struct tegra_qspi *tqspi,
>   
>   	/* Enable Combined sequence mode */
>   	val = tegra_qspi_readl(tqspi, QSPI_GLOBAL_CONFIG);
> +	if (spi->mode & SPI_TPM_HW_FLOW) {
> +		if (tqspi->soc_data->supports_tpm)
> +			val |= QSPI_TPM_WAIT_POLL_EN;
> +		else
> +			return -EIO;
> +	}
>   	val |= QSPI_CMB_SEQ_EN;
>   	tegra_qspi_writel(tqspi, val, QSPI_GLOBAL_CONFIG);
>   	/* Process individual transfer list */
> @@ -1196,6 +1204,8 @@ static int tegra_qspi_non_combined_seq_xfer(struct tegra_qspi *tqspi,
>   	/* Disable Combined sequence mode */
>   	val = tegra_qspi_readl(tqspi, QSPI_GLOBAL_CONFIG);
>   	val &= ~QSPI_CMB_SEQ_EN;
> +	if (tqspi->soc_data->supports_tpm)
> +		val &= ~QSPI_TPM_WAIT_POLL_EN;
>   	tegra_qspi_writel(tqspi, val, QSPI_GLOBAL_CONFIG);
>   	list_for_each_entry(transfer, &msg->transfers, transfer_list) {
>   		struct spi_transfer *xfer = transfer;
> @@ -1454,24 +1464,28 @@ static irqreturn_t tegra_qspi_isr_thread(int irq, void *context_data)
>   static struct tegra_qspi_soc_data tegra210_qspi_soc_data = {
>   	.has_dma = true,
>   	.cmb_xfer_capable = false,
> +	.supports_tpm = false,
>   	.cs_count = 1,
>   };
>   
>   static struct tegra_qspi_soc_data tegra186_qspi_soc_data = {
>   	.has_dma = true,
>   	.cmb_xfer_capable = true,
> +	.supports_tpm = false,
>   	.cs_count = 1,
>   };
>   
>   static struct tegra_qspi_soc_data tegra234_qspi_soc_data = {
>   	.has_dma = false,
>   	.cmb_xfer_capable = true,
> +	.supports_tpm = true,
>   	.cs_count = 1,
>   };
>   
>   static struct tegra_qspi_soc_data tegra241_qspi_soc_data = {
>   	.has_dma = false,
>   	.cmb_xfer_capable = true,
> +	.supports_tpm = true,
>   	.cs_count = 4,
>   };
>   


Reviewed-by: Jon Hunter <jonathanh@...dia.com>

The Tegra change looks good to me, assuming that everyone is happy with 
the other patches in the series.
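
For anyone following along from the client side, here is a minimal,
untested sketch (the function name is hypothetical, not code from this
series) of how a TPM client driver could request this behaviour through
the SPI_TPM_HW_FLOW mode bit that the hunk in
tegra_qspi_combined_seq_xfer() checks:

#include <linux/spi/spi.h>

/*
 * Illustrative only: setting the kernel-internal SPI_TPM_HW_FLOW mode
 * bit on the spi_device asks the controller to handle TPM wait states
 * in hardware. With this patch, the QSPI driver then either sets
 * QSPI_TPM_WAIT_POLL_EN or rejects the transfer with -EIO on SoCs
 * whose soc_data does not set supports_tpm.
 */
static void example_request_tpm_wait_polling(struct spi_device *spi)
{
	spi->mode |= SPI_TPM_HW_FLOW;
}

On Tegra210 and Tegra186, supports_tpm stays false, so such a transfer
fails cleanly with -EIO instead of hanging while waiting on the bus.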

Jon

-- 
nvpublic
