Date:   Wed, 14 Oct 2020 15:49:59 +0100
From:   Robin Murphy <robin.murphy@....com>
To:     Rob Herring <robh@...nel.org>
Cc:     Jisheng Zhang <Jisheng.Zhang@...aptics.com>,
        Kishon Vijay Abraham I <kishon@...com>,
        Lorenzo Pieralisi <lorenzo.pieralisi@....com>,
        Bjorn Helgaas <bhelgaas@...gle.com>,
        Jingoo Han <jingoohan1@...il.com>,
        Gustavo Pimentel <gustavo.pimentel@...opsys.com>,
        PCI <linux-pci@...r.kernel.org>,
        linux-omap <linux-omap@...r.kernel.org>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        linux-arm-kernel <linux-arm-kernel@...ts.infradead.org>
Subject: Re: [PATCH v7 2/2] PCI: dwc: Fix MSI page leakage in suspend/resume

On 2020-10-14 15:15, Rob Herring wrote:
> On Mon, Oct 12, 2020 at 6:37 AM Robin Murphy <robin.murphy@....com> wrote:
>>
>> On 2020-10-09 08:55, Jisheng Zhang wrote:
>>> Currently, dw_pcie_msi_init() allocates and maps a page for MSI, then
>>> programs PCIE_MSI_ADDR_LO and PCIE_MSI_ADDR_HI. The Root Complex
>>> may lose power during suspend-to-RAM, so when we resume, we want to
>>> redo the latter but not the former. If a DesignWare-based driver (for
>>> example, pcie-tegra194.c) calls dw_pcie_msi_init() in the resume path,
>>> the MSI page is leaked.
>>>
>>> As pointed out by Rob and Ard, there's no need to allocate a page for
>>> the MSI address; we can use an address in the driver data instead.
>>>
>>> To avoid mapping the MSI msg again during resume, move the mapping of
>>> the MSI msg from dw_pcie_msi_init() to dw_pcie_host_init().
>>
>> You should move the unmap there as well. As soon as you know what the
>> relevant address would be if you *were* to do DMA to this location, then
>> the exercise is complete. Leaving it mapped for the lifetime of the
>> device in order to do not-DMA to it seems questionable (and represents
>> technically incorrect API usage without at least a sync_for_cpu call
>> before any other access to the data).
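
As an illustration, a rough sketch of what that map/unmap pairing could
look like on top of the driver-data approach described in the commit
message above; the msi_msg field and the helper names here are
assumptions for the sake of the example, not the posted patch:

static int dw_pcie_msi_map(struct pcie_port *pp)
{
	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
	struct device *dev = pci->dev;

	/*
	 * Map a field of the driver data once, at host init time: no page
	 * allocation, so nothing can leak when resume merely reprograms
	 * PCIE_MSI_ADDR_LO/HI from pp->msi_data.
	 */
	pp->msi_data = dma_map_single(dev, &pp->msi_msg, sizeof(pp->msi_msg),
				      DMA_FROM_DEVICE);
	if (dma_mapping_error(dev, pp->msi_data))
		return -ENOMEM;

	/*
	 * If the CPU ever needs to touch pp->msi_msg while it stays mapped,
	 * strict streaming-DMA usage would want a dma_sync_single_for_cpu()
	 * first, per the comment above.
	 */
	return 0;
}

static void dw_pcie_msi_unmap(struct pcie_port *pp)
{
	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);

	/*
	 * Pair the unmap with the map rather than leaving the mapping
	 * around for the lifetime of the device.
	 */
	dma_unmap_single(pci->dev, pp->msi_data, sizeof(pp->msi_msg),
			 DMA_FROM_DEVICE);
}
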
>>
>> Another point of note is that using streaming DMA mappings at all is a
>> bit fragile (regardless of this change). If the host controller itself
>> has a limited DMA mask relative to physical memory (which integrators
>> still seem to keep doing...) then you could end up punching your MSI
>> hole right in the middle of the SWIOTLB bounce buffer, where it's then
>> almost *guaranteed* to interfere with real DMA :(
> 
> Couldn't that happen with the current code too? alloc_page() isn't
> guaranteed to be DMA'able, right?

Indeed that's what I meant by "regardless of this change".

>> If no DWC users have that problem and the current code is working well
>> enough, then I see little reason not to make this particular change to
>> tidy up the implementation; just bear in mind that there's always the
>> possibility of having to come back and change it yet again in future to
>> make it more robust. I had it in mind that this trick was done with a
>> coherent DMA allocation, which would be safe from addressing problems
>> but would need to be kept around for the lifetime of the device, but
>> maybe that was a different driver :/
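
For reference, a minimal sketch of that coherent-allocation variant; the
msi_cpu field and the helper name are made up for the example and not
taken from any driver:

static int dw_pcie_msi_setup_coherent(struct pcie_port *pp)
{
	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);

	/*
	 * The coherent allocation comes from within the device's DMA mask,
	 * so it sidesteps the addressing/SWIOTLB concern, but it has to
	 * stay allocated for the lifetime of the device; suspend/resume
	 * would only reprogram PCIE_MSI_ADDR_LO/HI from pp->msi_data.
	 */
	pp->msi_cpu = dma_alloc_coherent(pci->dev, sizeof(u64),
					 &pp->msi_data, GFP_KERNEL);
	if (!pp->msi_cpu)
		return -ENOMEM;

	return 0;
}
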
> 
> Well, the fact that we're wasting 4K or 64K of memory and then leaking
> it is the main reason to change it.
> 
> We just need any address that's not memory which PCI could access. We
> could possibly just take the end of (outbound) PCI memory space. Note
> that the DWC driver never sets up inbound translations, so it's all
> 1:1 mapping (though upstream could have some translation).
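
Purely to illustrate that idea (untested, and the helper below is made
up): pick an address inside the bridge's outbound MEM window, which is
decoded towards PCI rather than backed by RAM.

static u64 dw_pcie_pick_msi_target(struct pci_host_bridge *bridge)
{
	struct resource_entry *win;
	u64 target = 0;

	/*
	 * An address inside an outbound memory window can never collide
	 * with RAM (or the SWIOTLB bounce buffer). Take the top of the
	 * highest window, treating host and PCI addresses as 1:1 as noted
	 * above; a real version would also account for any translation
	 * and keep the address suitably aligned.
	 */
	resource_list_for_each_entry(win, &bridge->windows) {
		if (resource_type(win->res) != IORESOURCE_MEM)
			continue;
		target = max_t(u64, target, (u64)win->res->end);
	}

	return target;
}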

Right, this patch is undeniably a better implementation of the existing
approach; I just felt it worth pointing out that that approach itself
has fundamental flaws which may or may not be relevant to some current 
and/or future users. I know for a fact that there are platforms which 
cripple their PCIe host bridge to 32-bit physical addressing but support 
having RAM above that; I don't *think* any of the ones I know of are 
using the dw_pcie driver, but hey, how much do I know? ;)

Robin.
