Date:   Wed, 13 Apr 2022 13:19:08 -0700
From:   Nicolin Chen <nicolinc@...dia.com>
To:     Robin Murphy <robin.murphy@....com>
CC:     <will@...nel.org>, <joro@...tes.org>, <thunder.leizhen@...wei.com>,
        <jgg@...pe.ca>, <tglx@...utronix.de>, <john.garry@...wei.com>,
        <jean-philippe@...aro.org>, <christophe.jaillet@...adoo.fr>,
        <linux-arm-kernel@...ts.infradead.org>,
        <iommu@...ts.linux-foundation.org>, <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] iommu/arm-smmu-v3: Align size in __arm_smmu_tlb_inv_range

Hi Robin,

On Wed, Apr 13, 2022 at 02:40:31PM +0100, Robin Murphy wrote:
> On 2022-04-13 05:17, Nicolin Chen wrote:
> > To calculate num_pages, the size should be aligned with
> > "page size", determined by the tg value. Otherwise, its
> > following "while (iova < end)" might become an infinite
> > loop if unaligned size is slightly greater than 1 << tg.
> 
> Hmm, how does a non-page-aligned invalidation request get generated in
> the first place?

I don't have the testing environment, as this was a bug
reported by a customer using the SVA feature on top of SMMU.

But judging from the log, the non-page-aligned invalidation
request came from a likely incorrect end address, e.g.
	{ start = 0xff10000, end = 0xff20000 }
so the size (computed as end - start + 1) turned out to be
0x10001, which is unaligned.

I don't have a full call trace on hand right now to see
whether the upper callers are doing something wrong when
calculating the end address, though I've asked the owner
to check.

By looking at the call trace within the arm_smmu_* functions:
  __arm_smmu_tlb_inv_range
  arm_smmu_tlb_inv_range_asid
  arm_smmu_mm_invalidate_range
  (called from the mmu_notifier_* functions)

there's no address alignment check along this path. Although
I do think we should fix the caller that passes down the
non-page-aligned parameter, the SMMU driver shouldn't
silently loop forever when given unaligned inputs, IMHO.

Thanks
Nic
