Message-ID: <TY2PR0101MB313681440D640B215C5DF5E784FCA@TY2PR0101MB3136.apcprd01.prod.exchangelabs.com>
Date: Mon, 25 Sep 2023 03:59:03 +0000
From: Kelly Devilliv <kelly.devilliv@...look.com>
To: "robin.murphy@....com" <robin.murphy@....com>,
"joro@...tes.org" <joro@...tes.org>,
"will@...nel.org" <will@...nel.org>
CC: "iommu@...ts.linux.dev" <iommu@...ts.linux.dev>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: dma_map_resource() has poor performance for PCIe peer-to-peer
transactions when the IOMMU is enabled in Linux
Dear all,
I am working on an ARMv8 server with two GPU cards on it. Recently I needed to test PCIe peer-to-peer communication between the two GPU cards, but the throughput was only 4 GB/s.
After exploring the GPU's kernel-mode driver, I found that it uses the dma_map_resource() API to map the peer device's MMIO space. The IOMMU DMA layer then hardcodes the 'IOMMU_MMIO' prot in the subsequent IOMMU mapping:
static dma_addr_t iommu_dma_map_resource(struct device *dev, phys_addr_t phys,
		size_t size, enum dma_data_direction dir, unsigned long attrs)
{
	return __iommu_dma_map(dev, phys, size,
			dma_info_to_prot(dir, false, attrs) | IOMMU_MMIO,
			dma_get_mask(dev));
}
That ultimately sets the Device memory attribute in the PTE ('ARM_LPAE_PTE_MEMATTR_DEV' for stage-2 tables, or the 'ARM_LPAE_MAIR_ATTR_IDX_DEV' MAIR index for stage-1), which may have a negative impact on the performance of PCIe peer-to-peer transactions:
	/*
	 * Note that this logic is structured to accommodate Mali LPAE
	 * having stage-1-like attributes but stage-2-like permissions.
	 */
	if (data->iop.fmt == ARM_64_LPAE_S2 ||
	    data->iop.fmt == ARM_32_LPAE_S2) {
		if (prot & IOMMU_MMIO)
			pte |= ARM_LPAE_PTE_MEMATTR_DEV;
		else if (prot & IOMMU_CACHE)
			pte |= ARM_LPAE_PTE_MEMATTR_OIWB;
		else
			pte |= ARM_LPAE_PTE_MEMATTR_NC;
	} else {
		if (prot & IOMMU_MMIO)
			pte |= (ARM_LPAE_MAIR_ATTR_IDX_DEV
				<< ARM_LPAE_PTE_ATTRINDX_SHIFT);
		else if (prot & IOMMU_CACHE)
			pte |= (ARM_LPAE_MAIR_ATTR_IDX_CACHE
				<< ARM_LPAE_PTE_ATTRINDX_SHIFT);
	}
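On the stage-1 path, ARM_LPAE_MAIR_ATTR_IDX_DEV selects a Device-nGnRE MAIR encoding. As I understand it, Device memory forbids gathering and reordering, so writes cannot be merged into large bursts, which would explain the throughput gap. For reference, the MAIR encodings in io-pgtable-arm.c are (5.10-era values as I read them; please double-check against your tree):

#define ARM_LPAE_MAIR_ATTR_DEVICE	0x04	/* Device-nGnRE: no gathering/reordering */
#define ARM_LPAE_MAIR_ATTR_NC		0x44	/* Normal, Non-Cacheable */
#define ARM_LPAE_MAIR_ATTR_WBRWA	0xff	/* Normal, Write-Back, RW-Allocate */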
I tried removing the 'IOMMU_MMIO' prot from this path and recompiling the Linux kernel; the throughput then reached up to 28 GB/s.
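Concretely, the change I tested amounts to the following (against the iommu_dma_map_resource() shown above):

-	return __iommu_dma_map(dev, phys, size,
-			dma_info_to_prot(dir, false, attrs) | IOMMU_MMIO,
-			dma_get_mask(dev));
+	return __iommu_dma_map(dev, phys, size,
+			dma_info_to_prot(dir, false, attrs),
+			dma_get_mask(dev));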
Is there an elegant way to solve this issue without modifying the Linux kernel, e.g., a substitute for the dma_map_resource() API?
Thank you!
Platform info:
Linux kernel version: 5.10
PCIe Gen4 x16
Sincerely,
Kelly