Message-ID: <16a9565e-5b01-e1c2-0f4a-d06db7f3b093@arm.com>
Date: Wed, 16 Dec 2020 12:10:22 +0000
From: Robin Murphy <robin.murphy@....com>
To: Yong Wu <yong.wu@...iatek.com>, Joerg Roedel <joro@...tes.org>,
Will Deacon <will@...nel.org>
Cc: youlin.pei@...iatek.com, anan.sun@...iatek.com,
Nicolas Boichat <drinkcat@...omium.org>,
srv_heupstream@...iatek.com, chao.hao@...iatek.com,
linux-kernel@...r.kernel.org,
Krzysztof Kozlowski <krzk@...nel.org>,
Tomasz Figa <tfiga@...gle.com>,
iommu@...ts.linux-foundation.org,
linux-mediatek@...ts.infradead.org,
Matthias Brugger <matthias.bgg@...il.com>,
Greg Kroah-Hartman <gregkh@...gle.com>,
kernel-team@...roid.com, linux-arm-kernel@...ts.infradead.org
Subject: Re: [PATCH v3 4/7] iommu: Switch gather->end to unsigned long long
On 2020-12-16 10:36, Yong Wu wrote:
> Currently gather->end is "unsigned long", which may overflow on arch32
> in the corner case 0xfff00000 + 0x100000 (iova + size). Although this
> doesn't affect the size (end - start), it does affect the check
> "gather->end < end".
This won't help the same situation at the top of a 64-bit address space,
though, and now that we have TTBR1 support for AArch64 format that is
definitely a thing. Better to just encode the end address as the actual
end address, i.e. iova + size - 1. We don't lose anything other than the
ability to encode zero-sized invalidations that don't make sense anyway.
Robin.
> Fixes: a7d20dc19d9e ("iommu: Introduce struct iommu_iotlb_gather for batching TLB flushes")
> Signed-off-by: Yong Wu <yong.wu@...iatek.com>
> ---
> include/linux/iommu.h | 5 +++--
> 1 file changed, 3 insertions(+), 2 deletions(-)
>
> diff --git a/include/linux/iommu.h b/include/linux/iommu.h
> index 794d4085edd3..6e907a95d981 100644
> --- a/include/linux/iommu.h
> +++ b/include/linux/iommu.h
> @@ -178,7 +178,7 @@ enum iommu_dev_features {
> */
> struct iommu_iotlb_gather {
> unsigned long start;
> - unsigned long end;
> + unsigned long long end;
> size_t pgsize;
> };
>
> @@ -537,7 +537,8 @@ static inline void iommu_iotlb_gather_add_page(struct iommu_domain *domain,
> struct iommu_iotlb_gather *gather,
> unsigned long iova, size_t size)
> {
> - unsigned long start = iova, end = start + size;
> + unsigned long start = iova;
> + unsigned long long end = start + size;
>
> /*
> * If the new page is disjoint from the current range or is mapped at
>