Message-Id: <20220525223316.388490-1-willmcvicker@google.com>
Date: Wed, 25 May 2022 22:33:16 +0000
From: Will McVicker <willmcvicker@...gle.com>
To: Jingoo Han <jingoohan1@...il.com>,
Gustavo Pimentel <gustavo.pimentel@...opsys.com>,
Lorenzo Pieralisi <lorenzo.pieralisi@....com>,
Rob Herring <robh@...nel.org>,
"Krzysztof WilczyĆski" <kw@...ux.com>,
Bjorn Helgaas <bhelgaas@...gle.com>
Cc: kernel-team@...roid.com, Vidya Sagar <vidyas@...dia.com>,
Will McVicker <willmcvicker@...gle.com>,
linux-pci@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: [PATCH v1 1/1] PCI: dwc: Fix MSI msi_msg dma mapping

As of commit 07940c369a6b ("PCI: dwc: Fix MSI page leakage in
suspend/resume"), the PCIe DesignWare host driver has been using the
msi_msg field embedded in its driver data as the target of the MSI DMA
mapping. That can result in a DMA_MAPPING_ERROR, because the DMA
overflow check in dma_direct_map_page() rejects the mapping when the
address is wider than 32 bits (reported in [1]). The commit was
addressing a memory leak on suspend/resume by moving the MSI mapping
into dw_pcie_host_init(), but in doing so it dropped the dedicated page
allocation on the assumption that it was no longer needed.
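
For context, the failing check is roughly the one sketched below. This
is a simplified illustration of the dma-direct overflow test, not the
kernel's actual implementation, and the helper name is made up for the
example:

	#include <linux/types.h>

	/*
	 * A streaming mapping only succeeds if its last byte is
	 * addressable through the device's DMA mask (32 bits here).
	 */
	static bool sketch_dma_fits_mask(dma_addr_t dma_addr, size_t size,
					 u64 dma_mask)
	{
		return dma_addr + size - 1 <= dma_mask;
	}

	/*
	 * &pp->msi_msg sits inside the driver's private data, which may
	 * be allocated anywhere in RAM; on machines with memory above
	 * 4 GiB that address fails the test above, so
	 * dma_map_single_attrs() returns DMA_MAPPING_ERROR.
	 */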

To fix the DMA mapping failure and keep msi_msg DMA-able, switch back
to allocating a dedicated 32-bit page for the msi_msg. To avoid the
suspend/resume leak, allocate the page in dw_pcie_host_init(), since
that function should not be called during suspend/resume.
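
Concretely, the host-init path switches to the allocate-then-map
pattern sketched below. This is only an illustration of the pattern
(assuming a generic struct device *dev and adding an explicit
allocation-failure check); the real change is the dw_pcie_host_init()
hunk further down, which stores the page and DMA handle in struct
pcie_port:

	#include <linux/dma-mapping.h>
	#include <linux/gfp.h>
	#include <linux/mm.h>

	/* Map one page from the 32-bit zone as the MSI target address. */
	static dma_addr_t sketch_map_msi_page(struct device *dev,
					      struct page **pagep)
	{
		struct page *page;
		dma_addr_t dma;

		page = alloc_page(GFP_DMA32);
		if (!page)
			return DMA_MAPPING_ERROR;

		dma = dma_map_page(dev, page, 0, PAGE_SIZE, DMA_FROM_DEVICE);
		if (dma_mapping_error(dev, dma)) {
			__free_page(page);
			return DMA_MAPPING_ERROR;
		}

		*pagep = page;
		return dma;
	}

Doing the allocation once here (with the matching dma_unmap_page() and
__free_page() in dw_pcie_free_msi()) keeps the mapping out of the
suspend/resume path, so the leak the original commit fixed does not
come back.
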
[1] https://lore.kernel.org/all/Yo0soniFborDl7+C@google.com/
Signed-off-by: Will McVicker <willmcvicker@...gle.com>
---
 drivers/pci/controller/dwc/pcie-designware-host.c | 14 ++++++++------
 drivers/pci/controller/dwc/pcie-designware.h      |  2 +-
 2 files changed, 9 insertions(+), 7 deletions(-)

diff --git a/drivers/pci/controller/dwc/pcie-designware-host.c b/drivers/pci/controller/dwc/pcie-designware-host.c
index 2fa86f32d964..3655c6f88bf1 100644
--- a/drivers/pci/controller/dwc/pcie-designware-host.c
+++ b/drivers/pci/controller/dwc/pcie-designware-host.c
@@ -267,8 +267,9 @@ static void dw_pcie_free_msi(struct pcie_port *pp)
 	if (pp->msi_data) {
 		struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
 		struct device *dev = pci->dev;
 
-		dma_unmap_single_attrs(dev, pp->msi_data, sizeof(pp->msi_msg),
-				       DMA_FROM_DEVICE, DMA_ATTR_SKIP_CPU_SYNC);
+		dma_unmap_page(dev, pp->msi_data, PAGE_SIZE, DMA_FROM_DEVICE);
+		if (pp->msi_page)
+			__free_page(pp->msi_page);
 	}
 }
@@ -392,12 +393,13 @@ int dw_pcie_host_init(struct pcie_port *pp)
 			if (ret)
 				dev_warn(pci->dev, "Failed to set DMA mask to 32-bit. Devices with only 32-bit MSI support may not work properly\n");
 
-			pp->msi_data = dma_map_single_attrs(pci->dev, &pp->msi_msg,
-							    sizeof(pp->msi_msg),
-							    DMA_FROM_DEVICE,
-							    DMA_ATTR_SKIP_CPU_SYNC);
+			pp->msi_page = alloc_page(GFP_DMA32);
+			pp->msi_data = dma_map_page(pci->dev, pp->msi_page, 0, PAGE_SIZE,
+						    DMA_FROM_DEVICE);
 			if (dma_mapping_error(pci->dev, pp->msi_data)) {
 				dev_err(pci->dev, "Failed to map MSI data\n");
+				__free_page(pp->msi_page);
+				pp->msi_page = NULL;
 				pp->msi_data = 0;
 				goto err_free_msi;
 			}
diff --git a/drivers/pci/controller/dwc/pcie-designware.h b/drivers/pci/controller/dwc/pcie-designware.h
index 7d6e9b7576be..b5f528536358 100644
--- a/drivers/pci/controller/dwc/pcie-designware.h
+++ b/drivers/pci/controller/dwc/pcie-designware.h
@@ -190,8 +190,8 @@ struct pcie_port {
 	int			msi_irq;
 	struct irq_domain	*irq_domain;
 	struct irq_domain	*msi_domain;
-	u16			msi_msg;
 	dma_addr_t		msi_data;
+	struct page		*msi_page;
 	struct irq_chip		*msi_irq_chip;
 	u32			num_vectors;
 	u32			irq_mask[MAX_MSI_CTRLS];
--
2.36.1.124.g0e6072fb45-goog