dmar_init_reserved_ranges() reserves the card's MMIO ranges to prevent
handing out a DMA map that would overlap with the MMIO range.  The
problem is that while the Nvidia GPU has 64-bit BARs and can receive
> 40-bit PIOs, it can't generate > 40-bit DMAs.

So when the iommu code reserves these MMIO ranges, a > 40-bit entry
ends up in the rbtree.  On a UV test system with the Nvidia cards, the
BARs are:

  0001:36:00.0 VGA compatible controller: nVidia Corporation GT200GL
	Region 0: Memory at 92000000 (32-bit, non-prefetchable) [size=16M]
	Region 1: Memory at f8200000000 (64-bit, prefetchable) [size=256M]
	Region 3: Memory at 90000000 (64-bit, non-prefetchable) [size=32M]

So this 44-bit MMIO address 0xf8200000000 ends up in the rbtree.  As
DMA maps get added to and deleted from the rbtree we can end up with a
cached pointer to this 0xf8200000000 entry... this is what results in
the code handing out the invalid DMA map of 0xf81fffff000:

	[ (0xf8200000000 - 1) >> PAGE_SHIFT << PAGE_SHIFT ]

The IOVA code needs to better honor the "limit_pfn" when allocating
these maps.

Signed-off-by: Mike Travis
Reviewed-by: Mike Habeck
---
 drivers/pci/intel-iommu.c |    3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

--- linux.orig/drivers/pci/intel-iommu.c
+++ linux/drivers/pci/intel-iommu.c
@@ -1323,7 +1323,8 @@ static void dmar_init_reserved_ranges(vo
 		for (i = 0; i < PCI_NUM_RESOURCES; i++) {
 			r = &pdev->resource[i];
-			if (!r->flags || !(r->flags & IORESOURCE_MEM))
+			if (!r->flags || !(r->flags & IORESOURCE_MEM) ||
+			    r->start > pdev->dma_mask)
 				continue;
 			iova = reserve_iova(&reserved_iova_list,
 					    IOVA_PFN(r->start),
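
For illustration, a minimal userspace sketch of the arithmetic above (not
kernel code and not part of this patch), assuming a 4K page size
(PAGE_SHIFT of 12) and a 40-bit device DMA mask:

	#include <stdio.h>
	#include <stdint.h>

	#define PAGE_SHIFT 12

	int main(void)
	{
		/* Region 1 from the lspci output above: a 44-bit MMIO address */
		uint64_t bar = 0xf8200000000ULL;

		/* the device can only generate 40-bit DMA addresses */
		uint64_t dma_mask = (1ULL << 40) - 1;

		/*
		 * Last page boundary below the reserved range, i.e. the IOVA
		 * handed out when allocation starts from the cached rbtree
		 * entry for the reserved BAR.
		 */
		uint64_t iova = ((bar - 1) >> PAGE_SHIFT) << PAGE_SHIFT;

		printf("handed-out IOVA: 0x%llx\n", (unsigned long long)iova);
		printf("within dma_mask: %s\n", iova <= dma_mask ? "yes" : "no");
		return 0;
	}

This prints 0xf81fffff000 and "no", i.e. the handed-out map is above what
the device can address.  With the patch, and assuming the device's
dma_mask is 40-bit, Region 1 is never inserted into reserved_iova_list
(r->start is above pdev->dma_mask), so no entry above the DMA mask is
left in the rbtree to get cached.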