Message-ID: <3c2de089-8f80-3644-7735-7df1c6151d70@molgen.mpg.de>
Date: Wed, 27 Oct 2021 18:45:35 +0200
From: Paul Menzel <pmenzel@...gen.mpg.de>
To: Robin Murphy <robin.murphy@....com>
Cc: x86@...nel.org, Xinhui Pan <Xinhui.Pan@....com>,
LKML <linux-kernel@...r.kernel.org>,
amd-gfx@...ts.freedesktop.org, iommu@...ts.linux-foundation.org,
Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
Alex Deucher <alexander.deucher@....com>,
it+linux-iommu@...gen.mpg.de, Thomas Gleixner <tglx@...utronix.de>,
Christian König <christian.koenig@....com>,
Christian König <ckoenig.leichtzumerken@...il.com>,
Jörg Rödel <joro@...tes.org>,
Suravee Suthikulpanit <suravee.suthikulpanit@....com>
Subject: Re: I got an IOMMU IO page fault. What to do now?
Dear Robin,
On 25.10.21 18:01, Robin Murphy wrote:
> On 2021-10-25 12:23, Christian König wrote:
>> not sure how the IOMMU gives out addresses, but the printed ones look
>> suspicious to me. Something like we are using an invalid address like
>> -1 or similar.
>
> FWIW those look like believable DMA addresses to me, assuming that the
> DMA mapping APIs are being backed by iommu_dma_ops and the device has a
> 40-bit DMA mask, since the IOVA allocator works top-down.
>
> Likely causes are either a race where the dma_unmap_*() call happens
> before the hardware has really stopped accessing the relevant addresses,
> or the device's DMA mask has been set larger than it should be, and thus
> the upper bits have been truncated in the round-trip through the hardware.
>
> Given the addresses involved, my suspicions would initially lean towards
> the latter case - the faults are in the very topmost pages which imply
> they're the first things mapped in that range. The other contributing
> factor being the trick that the IOVA allocator plays for PCI devices,
> where it tries to prefer 32-bit addresses. Thus you're only likely to
> see this happen once you already have ~3.5-4GB of live DMA-mapped memory
> to exhaust the 32-bit IOVA space (minus some reserved areas) and start
> allocating from the full DMA mask. You should be able to check that with
> a 5.13 or newer kernel by booting with "iommu.forcedac=1" and seeing if
> it breaks immediately (unfortunately with an older kernel you'd have to
> manually hack iommu_dma_alloc_iova() to the same effect).
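The allocator behaviour Robin describes above can be sketched with a small
userspace toy model (this is an illustration only, not the real kernel
allocator; the class and method names are invented for the sketch, and a
simple bump pointer stands in for the kernel's rbtree-based allocator):

```python
# Toy model (NOT kernel code) of how iommu_dma_alloc_iova() prefers
# 32-bit IOVAs for PCI devices before falling back to the full DMA
# mask, and how iommu.forcedac=1 skips the 32-bit attempt.
SZ_4K = 0x1000

class ToyIovaDomain:
    def __init__(self, dma_mask_bits, forcedac=False):
        self.next32 = 1 << 32               # top-down cursor below 4 GiB
        self.next_top = 1 << dma_mask_bits  # top-down cursor below the mask
        self.forcedac = forcedac            # models iommu.forcedac=1

    def alloc(self):
        """Return a page-sized IOVA, allocating top-down."""
        if not self.forcedac and self.next32 > SZ_4K:
            self.next32 -= SZ_4K            # prefer 32-bit addresses first
            return self.next32
        if self.next_top > SZ_4K:
            self.next_top -= SZ_4K          # fall back to the full DMA mask
            return self.next_top
        raise MemoryError("IOVA space exhausted")

# With forcedac the very first mapping already lands in the topmost page
# of the 40-bit space - the kind of address seen in the reported faults.
d = ToyIovaDomain(dma_mask_bits=40, forcedac=True)
print(hex(d.alloc()))  # 0xfffffff000
```

This also shows why, without forcedac, the faulting addresses only appear
after roughly 4 GiB of live mappings: allocations stay below 4 GiB until
that cursor is exhausted, and only then move up to the top of the mask.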
I booted Linux 5.15-rc7 with `iommu.forcedac=1` and the system booted,
and I could log in remotely over SSH. Please find the Linux kernel
messages attached. (The system logs say lightdm failed to start, but it
might be some other issue due to a change in the operating system.)
>> Can you try that on an up to date kernel as well? E.g. ideally
>> bleeding edge amd-staging-drm-next from Alex repository.
Kind regards,
Paul
View attachment "20211027-linux-5.15-rc7-dell-optiplex-5055-iommu.forcedac.txt" of type "text/plain" (65662 bytes)