Message-ID: <13c91dfb-c540-ed8d-daa7-eab7207df221@arm.com>
Date:   Thu, 14 Apr 2022 11:32:38 +0100
From:   Robin Murphy <robin.murphy@....com>
To:     Nicolin Chen <nicolinc@...dia.com>
Cc:     will@...nel.org, joro@...tes.org, thunder.leizhen@...wei.com,
        jgg@...pe.ca, tglx@...utronix.de, john.garry@...wei.com,
        jean-philippe@...aro.org, christophe.jaillet@...adoo.fr,
        linux-arm-kernel@...ts.infradead.org,
        iommu@...ts.linux-foundation.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] iommu/arm-smmu-v3: Align size in __arm_smmu_tlb_inv_range

On 2022-04-13 21:19, Nicolin Chen wrote:
> Hi Robin,
> 
> On Wed, Apr 13, 2022 at 02:40:31PM +0100, Robin Murphy wrote:
>> On 2022-04-13 05:17, Nicolin Chen wrote:
>>> To calculate num_pages, the size should be aligned to the
>>> page size determined by the tg value. Otherwise, the
>>> following "while (iova < end)" loop may become infinite
>>> if an unaligned size is slightly greater than 1 << tg.
>>
>> Hmm, how does a non-page-aligned invalidation request get generated in
>> the first place?
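
For reference, a simplified sketch of the loop in question, loosely
modelled on __arm_smmu_tlb_inv_range() in arm-smmu-v3.c; the structure
and constants here are illustrative fragments, not the exact mainline
code:

	unsigned long end = iova + size;
	size_t tg = 12;				/* e.g. 4K leaf pages (log2) */
	unsigned long num_pages = size >> tg;	/* truncates an unaligned size */

	while (iova < end) {
		unsigned long scale = __ffs(num_pages);
		unsigned long num = (num_pages >> scale) & CMDQ_TLBI_RANGE_NUM_MAX;
		size_t inv_range = num << (scale + tg);

		/* ... emit a range TLBI covering [iova, iova + inv_range) ... */
		num_pages -= num << scale;
		iova += inv_range;
	}

With size = 0x10001 and tg = 12, num_pages = 0x10 covers only 0x10000
bytes, so num_pages reaches 0 while iova < end still holds; from then
on inv_range stays 0 and iova never advances, which is the infinite
loop the commit message describes.
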
> 
> I don't have a test environment for this; the bug was
> reported by a customer who uses the SVA feature on top of
> the SMMU.
> 
> But judging from the log, the non-page-aligned invalidation
> request came from a likely incorrect end address, e.g.
> 	{ start = 0xff10000, end = 0xff20000 }
> So the size turned out to be 0x10001, unaligned.
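
One plausible way those addresses could yield that size is inclusive-end
arithmetic in the caller; this is an assumption for illustration, since
the full call trace isn't available:

	size_t bad  = end - start + 1;	/* 0xff20000 - 0xff10000 + 1 = 0x10001 */
	size_t good = end - start;	/* = 0x10000, page-aligned */
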
> 
> I don't have a full call trace on hand right now to see if
> the upper callers are doing something wrong when calculating
> the end address, though I've asked the owner to check.
> 
> By looking at the call trace within arm_smmu_* functions:
>    __arm_smmu_tlb_inv_range
>    arm_smmu_tlb_inv_range_asid
>    arm_smmu_mm_invalidate_range
>    {from mmu_notifier_* functions}
> 
> There's no address alignment check along that path. Although
> I do think we should fix the caller that passes down the
> non-page-aligned parameters, the SMMU driver shouldn't
> silently loop forever if unaligned inputs are given, IMHO.
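
Per the subject line, the patch presumably rounds the size up before
deriving num_pages; roughly, since the exact diff isn't quoted in this
thread:

	num_pages = ALIGN(size, 1UL << tg) >> tg;	/* round up, don't truncate */
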

Oh, sure, I'm not saying we definitely don't need to fix anything; I'd
just like to get a better understanding of *what* we're fixing. I'd have
(naively) expected the mm layer to give us page-aligned quantities even
in the SVA notifier case, so if we've got a clear off-by-one somewhere
in that path we should fix that before just blindly over-invalidating to
paper over it. If we still also want to be robust at the SMMU driver end
just in case, something like "if (WARN_ON(num_pages == 0)) num_pages =
1;" might be more appropriate. However, if it turns out that we *can*
actually end up with unsanitised input from some userspace unmap
interface getting this far, then a silent fixup is the best option, but
even then I'd still like to confirm that we're rounding in the same
direction as whoever touched the pagetables (since it can't have been us).
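
As a sketch of where such a check could sit (the placement is an
assumption, not a tested patch), right after num_pages is derived:

	num_pages = size >> tg;
	if (WARN_ON(num_pages == 0))
		num_pages = 1;	/* round a sub-page request up so the loop terminates */
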

Thanks,
Robin.
