Date:   Thu, 17 Aug 2017 15:12:01 -0500
From:   Haris Okanovic <haris.okanovic@...com>
To:     Jason Gunthorpe <jgunthorpe@...idianresearch.com>,
        Sebastian Andrzej Siewior <sebastian.siewior@...utronix.de>
Cc:     Ken Goldman <kgold@...ux.vnet.ibm.com>,
        linux-rt-users@...r.kernel.org, linux-kernel@...r.kernel.org,
        tpmdd-devel@...ts.sourceforge.net, harisokn@...il.com,
        julia.cartwright@...com, gratian.crisan@...com,
        scott.hartman@...com, chris.graf@...com, brad.mouring@...com,
        jonathan.david@...com, peterhuewe@....de, tpmdd@...horst.net,
        jarkko.sakkinen@...ux.intel.com, eric.gardiner@...com
Subject: Re: [tpmdd-devel] [PATCH v2] tpm_tis: fix stall after iowrite*()s

Neither wmb() nor mb() has any effect when substituted for 
ioread8(iobase + TPM_ACCESS(0)) in tpm_tis_flush(). I still see 300-400 
us spikes in cyclictest when invoking my TPM chip's RNG.
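
For reference, this is the shape of the read-back flush I've been 
testing. It's a sketch following the v2 patch (kernel context; 
TPM_ACCESS() and the iobase mapping come from tpm_tis_core), not a 
verbatim copy:

    /* Read back from a benign register after each write so the posted
     * write is pushed through the PCI/LPC bridge before we continue.
     * The read completion cannot return until the writes queued ahead
     * of it have been flushed. TPM_ACCESS(0) is used because reading
     * it has no side effects on the device. */
    static inline void tpm_tis_flush(void __iomem *iobase)
    {
            ioread8(iobase + TPM_ACCESS(0));
    }

    static inline void tpm_tis_iowrite8(u8 b, void __iomem *iobase,
                                        u32 addr)
    {
            iowrite8(b, iobase + addr);
            tpm_tis_flush(iobase);
    }

    static inline void tpm_tis_iowrite32(u32 b, void __iomem *iobase,
                                         u32 addr)
    {
            iowrite32(b, iobase + addr);
            tpm_tis_flush(iobase);
    }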

-- Haris


On 08/17/2017 12:17 PM, Jason Gunthorpe wrote:
> On Thu, Aug 17, 2017 at 12:38:07PM +0200, Sebastian Andrzej Siewior wrote:
> 
>>> I worry a bit about "appears to fix".  It seems odd that the TPM device
>>> driver would be the first code to uncover this.  Can anyone confirm that the
>>> chipset does indeed have this bug?
>>
>> What Haris says makes sense. It's just that not all architectures
>> accumulate/batch writes to HW.
> 
> It doesn't seem that odd to me. In modern Intel chipsets the physical
> LPC bus is used for very little: maybe some flash and possibly a
> Winbond super I/O at worst, plus the TPM.
> 
> I can't confirm what Intel has done, but if writes are posted, then it
> is not a 'bug'; it is expected operation for a PCI/LPC bridge device to
> have an ordered queue of posted writes, and thus higher latency when
> processing reads due to ordering requirements.
> 
> Other drivers may not see it because most LPC usage would not be
> write-heavy, or might use IO instructions, which are not posted.
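
The posted vs. non-posted distinction is roughly port I/O vs. MMIO. A
hypothetical illustration (x86 kernel context, linux/io.h; 'port',
'mmio', and 'val' are placeholders, not from any real driver):

    /* Port I/O writes (outb) are non-posted: the instruction does not
     * complete until the cycle finishes on the bus. MMIO writes are
     * posted and may sit in a bridge queue until a read drains it. */
    static void posted_vs_nonposted(u8 val, u16 port, void __iomem *mmio)
    {
            outb(val, port);      /* non-posted: done when this returns */
            iowrite8(val, mmio);  /* posted: may still be buffered */
            (void)ioread8(mmio);  /* read-back drains the posted write */
    }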
> 
> I can confirm that my ARM systems with a custom PCI-LPC bridge have
> exactly the same problem, and that the read-back (readl()) is the only
> solution.
> 
> This is because writes to LPC are posted over PCI and will be buffered
> in the root complex, in the device end port, and internally in the LPC
> bridge. Since they are posted, there is no way for the CPU to know when
> they complete and when it would be 'low latency' to issue a read.
> 
>> So powerpc (for instance) has a sync operation after each write to HW.
>> I am wondering whether we need something like that on x86.
> 
> Even on something like PPC, 'sync' is not defined to globally flush
> posted writes, and will not help. wmb() is probably similar.
> 
> Jason
> 
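
On the wmb() point: barriers order the CPU's stores against each other,
but say nothing about when a posted write actually reaches the device.
A sketch of the difference (REG_A/REG_B are hypothetical register
offsets, not from the driver):

    /* wmb() guarantees the device sees the write to REG_A before the
     * write to REG_B, but both may still sit in the bridge's
     * posted-write queue after wmb() returns. Only a read from the
     * same device bounds their completion. */
    static void ordered_but_not_flushed(void __iomem *iobase)
    {
            iowrite8(0x01, iobase + REG_A);
            wmb();                          /* ordering, not completion */
            iowrite8(0x02, iobase + REG_B);
            (void)ioread8(iobase + REG_A);  /* completion: queue drained */
    }

That is why the ioread8() in tpm_tis_flush() works where wmb()/mb() do
not.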
