Message-ID: <c628b545-f883-3c2e-cabf-944f6a313bf2@molgen.mpg.de>
Date:   Fri, 29 May 2020 10:33:28 +0200
From:   Paul Menzel <pmenzel@...gen.mpg.de>
To:     Douglas Anderson <dianders@...omium.org>,
        Peter Huewe <peterhuewe@....de>,
        Jarkko Sakkinen <jarkko.sakkinen@...ux.intel.com>
Cc:     Andrey Pronin <apronin@...omium.org>,
        Stephen Boyd <swboyd@...omium.org>,
        Arnd Bergmann <arnd@...db.de>,
        Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
        Jason Gunthorpe <jgg@...pe.ca>,
        linux-integrity@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] tpm_tis_spi: Don't send anything during flow control

Dear Douglas,


Thank you for the patch.

Am 29.05.20 um 00:19 schrieb Douglas Anderson:
> During flow control we are just reading from the TPM, yet our spi_xfer
> has the tx_buf and rx_buf both non-NULL which means we're requesting a
> full duplex transfer.
> 
> SPI is always somewhat of a full duplex protocol anyway and in theory
> the other side shouldn't really be looking at what we're sending it
> during flow control, but it's still a bit ugly to be sending some
> "random" data when we shouldn't.
> 
> The default tpm_tis_spi_flow_control() tries to address this by
> setting 'phy->iobuf[0] = 0'.  This partially avoids the problem of
> sending "random" data, but since our tx_buf and rx_buf both point to
> the same place I believe there is the potential of us sending the
> TPM's previous byte back to it if we hit the retry loop.
> 
> Another flow control implementation, cr50_spi_flow_control(), doesn't
> address this at all.
> 
> Let's clean this up and just make the tx_buf NULL before we call
> flow_control().  Not only does this ensure that we're not sending any
> "random" bytes but it also possibly could make the SPI controller
> behave in a slightly more optimal way.
> 
> NOTE: no actual observed problems are fixed by this patch--it's was
> just made based on code inspection.

s/it's was/it was/

Were you able to test this? Maybe in the “Chromebook QA arsenal”? Are
you already running it in production on Google Chrome OS devices?
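Regarding the aliasing concern described in the commit message, here is a
minimal sketch (my own illustration with assumed names, not the driver code
itself) of why a full-duplex transfer whose tx_buf and rx_buf alias the same
buffer can echo the TPM's previous byte back during the wait-state retry loop:

	#include <linux/spi/spi.h>

	u8 iobuf[1];			/* stands in for phy->iobuf          */
	struct spi_transfer xfer = {
		.tx_buf = iobuf,	/* bytes shifted out...              */
		.rx_buf = iobuf,	/* ...are overwritten by bytes read  */
		.len	= 1,
	};

	/* Poll 1: iobuf[0] == 0x00 goes out, a TPM status byte (say 0x42)   */
	/* comes back into the same slot.                                    */
	/* Poll 2: without clearing iobuf[0] (or without tx_buf = NULL), the */
	/* 0x42 just received would be shifted back out to the TPM.          */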

> Signed-off-by: Douglas Anderson <dianders@...omium.org>
> ---
> 
>   drivers/char/tpm/tpm_tis_spi_main.c | 9 ++++-----
>   1 file changed, 4 insertions(+), 5 deletions(-)
> 
> diff --git a/drivers/char/tpm/tpm_tis_spi_main.c b/drivers/char/tpm/tpm_tis_spi_main.c
> index d96755935529..8d2c581a93c6 100644
> --- a/drivers/char/tpm/tpm_tis_spi_main.c
> +++ b/drivers/char/tpm/tpm_tis_spi_main.c
> @@ -53,8 +53,6 @@ static int tpm_tis_spi_flow_control(struct tpm_tis_spi_phy *phy,
>   
>   	if ((phy->iobuf[3] & 0x01) == 0) {
>   		// handle SPI wait states
> -		phy->iobuf[0] = 0;
> -
>   		for (i = 0; i < TPM_RETRY; i++) {
>   			spi_xfer->len = 1;
>   			spi_message_init(&m);
> @@ -104,6 +102,8 @@ int tpm_tis_spi_transfer(struct tpm_tis_data *data, u32 addr, u16 len,
>   		if (ret < 0)
>   			goto exit;
>   
> +		/* Flow control transfers are receive only */
> +		spi_xfer.tx_buf = NULL;
>   		ret = phy->flow_control(phy, &spi_xfer);
>   		if (ret < 0)
>   			goto exit;
> @@ -113,9 +113,8 @@ int tpm_tis_spi_transfer(struct tpm_tis_data *data, u32 addr, u16 len,
>   		spi_xfer.delay.value = 5;
>   		spi_xfer.delay.unit = SPI_DELAY_UNIT_USECS;
>   
> -		if (in) {
> -			spi_xfer.tx_buf = NULL;
> -		} else if (out) {
> +		if (out) {
> +			spi_xfer.tx_buf = phy->iobuf;
>   			spi_xfer.rx_buf = NULL;
>   			memcpy(phy->iobuf, out, transfer_len);
>   			out += transfer_len;
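As a side note (a sketch under my own assumptions, not a statement about every
controller): with tx_buf left NULL, the SPI core treats the transfer as
receive-only and most controllers clock out their idle/dummy pattern rather
than the contents of phy->iobuf, so after this patch the wait-state poll looks
conceptually like:

	/* Receive-only wait-state poll (sketch). */
	struct spi_transfer xfer = {
		.tx_buf = NULL,		/* nothing of ours is shifted out  */
		.rx_buf = phy->iobuf,	/* TPM wait-state byte lands here  */
		.len	= 1,
	};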

Reviewed-by: Paul Menzel <pmenzel@...gen.mpg.de>


Kind regards,

Paul
