Message-Id: <13741b28-1b5c-de55-3945-e05911e5a4e2@linux.vnet.ibm.com>
Date:   Wed, 16 Aug 2017 17:15:55 -0400
From:   Ken Goldman <kgold@...ux.vnet.ibm.com>
To:     linux-rt-users@...r.kernel.org, linux-kernel@...r.kernel.org
Cc:     tpmdd-devel@...ts.sourceforge.net
Subject: Re: [tpmdd-devel] [PATCH v2] tpm_tis: fix stall after iowrite*()s

On 8/15/2017 4:13 PM, Haris Okanovic wrote:
> ioread8() operations to TPM MMIO addresses can stall the CPU when
> immediately following a sequence of iowrite*()s to the same region.
> 
> For example, cyclictest measures ~400us latency spikes when a non-RT
> usermode application communicates with an SPI-based TPM chip (Intel Atom
> E3940 system, PREEMPT_RT_FULL kernel). The spikes are caused by a
> stalling ioread8() operation following a sequence of 30+ iowrite8()s to
> the same address. I believe this happens because the write sequence is
> buffered (in the CPU or somewhere along the bus), and gets flushed on
> the first LOAD instruction (ioread*()) that follows.
> 
> The enclosed change appears to fix this issue: read the TPM chip's
> access register (status code) after every iowrite*() operation to
> amortize the cost of flushing data to the chip across multiple
> instructions.

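For concreteness, the pattern being proposed looks roughly like this (a
sketch, not the actual patch; the helper name is illustrative, and the
TPM_ACCESS() offset follows tpm_tis_core.h):

#include <linux/types.h>
#include <linux/io.h>

/* Locality access register offset, as defined in tpm_tis_core.h. */
#define TPM_ACCESS(l)	(0x0000 | ((l) << 12))

static inline void tpm_tis_iowrite8(u8 b, void __iomem *iobase, u32 addr)
{
	iowrite8(b, iobase + addr);
	/*
	 * Read back the locality 0 access register so the posted write is
	 * flushed to the chip now, instead of letting a long write sequence
	 * accumulate and stall the first ioread8() that follows.
	 */
	ioread8(iobase + TPM_ACCESS(0));
}

Reading TPM_ACCESS is a plausible flush target because reads of that
register have no side effects; the flush cost is then paid incrementally
after each write rather than all at once on the next read.
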
I worry a bit about "appears to fix".  It seems odd that the TPM device 
driver would be the first code to uncover this.  Can anyone confirm that 
the chipset does indeed have this bug?

I'd also like an indication of the performance penalty.  We're doing a 
lot of work to improve performance, and I worry that "do a read after 
every write" will have a measurable impact.
