Message-ID: <aGQ6eiL22KIMjSGo@kbusch-mbp>
Date: Tue, 1 Jul 2025 15:43:54 -0400
From: Keith Busch <kbusch@...nel.org>
To: Ben Copeland <ben.copeland@...aro.org>
Cc: Christoph Hellwig <hch@....de>, linux-kernel@...r.kernel.org,
lkft-triage@...ts.linaro.org, regressions@...ts.linux.dev,
linux-nvme@...ts.infradead.org,
Dan Carpenter <dan.carpenter@...aro.org>, axboe@...nel.dk,
sagi@...mberg.me, iommu@...ts.linux.dev
Subject: Re: next-20250627: IOMMU DMA warning during NVMe I/O completion
after 06cae0e3f61c
On Mon, Jun 30, 2025 at 08:51:01PM +0100, Ben Copeland wrote:
>
> [ 1.083447] arm-smmu-v3 arm-smmu-v3.16.auto: option mask 0x0
> [ 1.083460] arm-smmu-v3 arm-smmu-v3.16.auto: IDR0.HTTU features(0x600000) overridden by FW configuration (0x0)
> [ 1.083463] arm-smmu-v3 arm-smmu-v3.16.auto: ias 48-bit, oas 48-bit (features 0x0094dfef)
Neat, I have a machine with the same. The iommu granularity appears to
be able to take on various values, so let's see what the iommu group's
pgsize_bitmap is on mine with some bpf and drgn magic:
# bpftrace -e 'kfunc:iommu_group_show_type { printf("iommu_group: 0x%lx\n", args->group); }' &
# cat /sys/kernel/iommu_groups/0/type
iommu_group: 0xffff0000b54d0600
# drgn
>>> hex(Object(prog, "struct iommu_group *", 0xffff0000b54d0600).default_domain.pgsize_bitmap)
'0x20010000'
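For the curious, here's a quick (untested) userspace sketch of decoding
a pgsize_bitmap value like the one drgn printed above; the hardcoded
value is just the one from my session:

#include <stdio.h>

int main(void)
{
    /* bits 16 and 29 are set: the domain supports 64K and 512M pages */
    unsigned long bitmap = 0x20010000UL;
    /* lowest set bit is the smallest supported mapping granularity */
    unsigned long min_pgsize = 1UL << __builtin_ctzl(bitmap);

    printf("minimum iommu page size: 0x%lx (%luK)\n",
           min_pgsize, min_pgsize >> 10);
    return 0;
}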
So 64k is the minimum iommu granularity in its current configuration:
definitely too big to guarantee coalescing nvme segments. If yours is
similarly configured, that explains why you end up on that path.
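Roughly what that decision amounts to, as a simplified sketch (not the
actual nvme-pci logic; can_coalesce() is a made-up name):

#include <stdbool.h>
#include <stddef.h>

#define NVME_CTRL_PAGE_SIZE 4096 /* NVMe PRP granularity */

/* Coalescing all physical segments into a single IOVA mapping only
 * helps if the IOMMU can map at (or below) NVMe's page granularity;
 * with a 64K minimum granule that can't be guaranteed, so the driver
 * has to fall back to mapping segments individually.
 */
static bool can_coalesce(size_t iommu_min_pgsize)
{
    return iommu_min_pgsize <= NVME_CTRL_PAGE_SIZE;
}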