Message-ID: <Yloj6GM+yykImKvp@Asurada-Nvidia>
Date:   Fri, 15 Apr 2022 19:03:20 -0700
From:   Nicolin Chen <nicolinc@...dia.com>
To:     Robin Murphy <robin.murphy@....com>
CC:     <will@...nel.org>, <joro@...tes.org>, <thunder.leizhen@...wei.com>,
        <jgg@...pe.ca>, <tglx@...utronix.de>, <john.garry@...wei.com>,
        <jean-philippe@...aro.org>, <christophe.jaillet@...adoo.fr>,
        <linux-arm-kernel@...ts.infradead.org>,
        <iommu@...ts.linux-foundation.org>, <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] iommu/arm-smmu-v3: Align size in __arm_smmu_tlb_inv_range

On Thu, Apr 14, 2022 at 11:32:38AM +0100, Robin Murphy wrote:
> > By looking at the call trace within arm_smmu_* functions:
> >    __arm_smmu_tlb_inv_range
> >    arm_smmu_tlb_inv_range_asid
> >    arm_smmu_mm_invalidate_range
> >    {from mm_notifier_* functions}
> > 
> > There's no address alignment check. Although I do think we
> > should fix the source who passes down the non-page-aligned
> > parameter, the SMMU driver shouldn't silently dead loop if
> > a set of unaligned inputs are given, IMHO.
> 
> Oh, sure, I'm not saying we definitely don't need to fix anything, I'd
> just like to get a better understanding of *what* we're fixing. I'd have
> (naively) expected the mm layer to give us page-aligned quantities even
> in the SVA notifier case, so if we've got a clear off-by-one somewhere
> in that path we should fix that before just blindly over-invalidating to
> paper over it; if we still also want to be robust at the SMMU driver end
> just in case, something like "if (WARN_ON(num_pages == 0)) num_pages =
> 1;" might be more appropriate. However if it turns out that we *can*
> actually end up with unsanitised input from some userspace unmap
> interface getting this far, then a silent fixup is the best option, but
> if so I'd still like to confirm that we're rounding in the same
> direction as whoever touched the pagetables (since it can't have been us).

I got some details:

[ 1008.868735] mmap: -------__do_munmap: range [ffffa4fd0000, ffffa4fe0000] len 10000
[ 1008.869183] -------arm_smmu_mm_invalidate_range: range [ffffa4fd0000, ffffa4fe0000] len 10001
[ 1009.056127] ------------[ cut here ]------------
[ 1009.345791] WARNING: CPU: 0 PID: 131 at drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c:189 arm_smmu_mm_invalidate_range+0x4c/0xa8
[ 1009.605439] Modules linked in: nvidia(O)
[ 1009.692799] CPU: 0 PID: 131 Comm: dmaTest Tainted: G        W  O      5.15.0-tegra #30
[ 1009.865535] Hardware name: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
[ 1010.015871] pstate: 40400005 (nZcv daif +PAN -UAO -TCO -DIT -SSBS BTYPE=--)
[ 1010.168191] pc : arm_smmu_mm_invalidate_range+0x4c/0xa8
[ 1010.283136] lr : arm_smmu_mm_invalidate_range+0x48/0xa8
[ 1010.397119] sp : ffff80001436fa60
[ 1010.469568] x29: ffff80001436fa60 x28: ffff00001840be80 x27: ffff000007b3fff0
[ 1010.629631] x26: 00e80000589f0f43 x25: ffff00001aa20288 x24: 0000000000000000
[ 1010.786432] x23: ffff0000138c1000 x22: ffff00001783aa00 x21: ffff00001c021380
[ 1010.944448] x20: 0000ffffa4fd0000 x19: 0000000000010001 x18: 0000000000000000
[ 1011.101568] x17: ffff80000e4b0000 x16: ffff800010010000 x15: 000081a13a744e89
[ 1011.259839] x14: 00000000000000ce x13: 00000000000000ce x12: 0000000000000000
[ 1011.415616] x11: 0000000000000010 x10: 00000000000009c0 x9 : ffff80001436f7f0
[ 1011.575552] x8 : ffff000013563420 x7 : ffff00001feb9180 x6 : 00000000000035aa
[ 1011.731775] x5 : 0000000000000000 x4 : ffff00001feb29e0 x3 : ffff00001feb5a78
[ 1011.887615] x2 : 66f9034381513000 x1 : 0000000000000000 x0 : 0000000000000051
[ 1012.042944] Call trace:
[ 1012.097919]  arm_smmu_mm_invalidate_range+0x4c/0xa8
[ 1012.204480]  __mmu_notifier_invalidate_range+0x68/0xb0
[ 1012.318208]  unmap_page_range+0x730/0x740
[ 1012.405951]  unmap_single_vma+0x4c/0xb0
[ 1012.521920]  unmap_vmas+0x70/0xf0
[ 1012.633727]  unmap_region+0xb0/0x110
[ 1012.753856]  __do_munmap+0x36c/0x460
[ 1012.855168]  __vm_munmap+0x70/0xd0
[ 1012.929791]  __arm64_sys_munmap+0x34/0x50
[ 1013.018944]  invoke_syscall.constprop.0+0x4c/0xe0
[ 1013.122047]  do_el0_svc+0x50/0x150
[ 1013.196415]  el0_svc+0x28/0xc0
[ 1013.262848]  el0t_64_sync_handler+0xb0/0xc0
[ 1013.355584]  el0t_64_sync+0x1a0/0x1a4
[ 1013.435903] ---[ end trace c95eb7dc909f29ba ]---

We can see from the call trace and logs that the invalidation range
comes from __do_munmap() with end address 0xffffa4fe0000.

The problem seems to be a difference in how the mm and iommu cores
express their end addresses: the mm core passes an exclusive end
(start + size), while the iommu core uses an inclusive end
(start + size - 1). So the end address 0xffffa4fe0000 coming from
the mm core corresponds to 0xffffa4fdffff in the iommu convention,
and computing the size here as "end - start + 1" overshoots by one
byte, which is where the len 10001 in the log above comes from.
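
Just to double-check the arithmetic with the addresses from the log
above, here is a trivial standalone user-space sketch (nothing from
the driver, only the logged numbers):

#include <stdio.h>

int main(void)
{
	unsigned long start = 0xffffa4fd0000UL;
	unsigned long end   = 0xffffa4fe0000UL;	/* exclusive end from __do_munmap */

	/* mm convention: end is exclusive */
	printf("end - start         = 0x%lx\n", end - start);		/* 0x10000 */
	/* current driver math, treating the exclusive end as inclusive */
	printf("end - start + 1     = 0x%lx\n", end - start + 1);	/* 0x10001, as in the log */
	/* iommu convention: the inclusive end would be end - 1 */
	printf("(end-1) - start + 1 = 0x%lx\n", (end - 1) - start + 1);	/* 0x10000 */
	return 0;
}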

Perhaps we should simply do something like the following?

diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c
index d816759a6bcf..e280568bb513 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c
@@ -183,7 +183,7 @@ static void arm_smmu_mm_invalidate_range(struct mmu_notifier *mn,
 {
        struct arm_smmu_mmu_notifier *smmu_mn = mn_to_smmu(mn);
        struct arm_smmu_domain *smmu_domain = smmu_mn->domain;
-       size_t size = end - start + 1;
+       size_t size = end - start;

        if (!(smmu_domain->smmu->features & ARM_SMMU_FEAT_BTM))
                arm_smmu_tlb_inv_range_asid(start, size, smmu_mn->cd->asid,
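
As for the driver-side robustness you mentioned, a guard along those
lines might still be worth having on top of the change above. This is
only a rough sketch with illustrative names, not the actual
__arm_smmu_tlb_inv_range code:

/*
 * Rough sketch (illustrative names): clamp a zero page count so that
 * unaligned input cannot turn into an endless invalidation loop.
 */
static void example_tlb_inv_range(unsigned long iova, size_t size,
				  size_t granule)
{
	/* hypothetical: however the real code derives its page count */
	size_t num_pages = size / granule;

	if (WARN_ON(num_pages == 0))
		num_pages = 1;

	/* ... issue num_pages invalidation commands covering iova ... */
}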

Thanks
Nic
