Message-ID: <c600e9b2534d54082a5272b508a7985f@codeaurora.org>
Date: Thu, 10 Jun 2021 10:54:58 +0530
From: Sai Prakash Ranjan <saiprakash.ranjan@...eaurora.org>
To: Robin Murphy <robin.murphy@....com>
Cc: Will Deacon <will@...nel.org>, Joerg Roedel <joro@...tes.org>,
iommu@...ts.linux-foundation.org,
linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org,
linux-arm-msm@...r.kernel.org
Subject: Re: [PATCH] iommu/io-pgtable-arm: Optimize partial walk flush for large scatter-gather list
Hi Robin,
On 2021-06-10 00:14, Robin Murphy wrote:
> On 2021-06-09 15:53, Sai Prakash Ranjan wrote:
>> Currently, for iommu_unmap() of a large scatter-gather list with
>> page-size elements, the majority of the time is spent flushing partial
>> walks in __arm_lpae_unmap(), which is a VA-based TLB invalidation
>> (TLBIVA for arm-smmu).
>>
>> For example, to unmap a 32MB scatter-gather list with page-size elements
>> (8192 entries), there are 16 2MB buffer unmaps based on the pgsize (2MB
>> for a 4K granule), and each 2MB buffer further results in 512 TLBIVAs
>> (2MB/4K), for a total of 8192 TLBIVAs (512*16) across the 16 2MB
>> buffers, causing a huge overhead.
>>
>> So instead, use io_pgtable_tlb_flush_all() to invalidate the entire
>> context if the size (pgsize) is greater than the granule size (4K, 16K,
>> 64K). For this example of a 32MB scatter-gather list unmap, that results
>> in just 16 ASID-based TLB invalidations via the tlb_flush_all() callback
>> (TLBIASID in the case of arm-smmu) as opposed to 8192 TLBIVAs, thereby
>> drastically improving the performance of unmaps.
>>
>> The condition (size > granule size) is chosen for
>> io_pgtable_tlb_flush_all() because, for any granule with supported
>> pgsizes, we will have at least 512 TLB invalidations, for which
>> tlb_flush_all() is already recommended. For example, a 4K granule with a
>> 2MB pgsize results in 512 TLBIVAs in the partial walk flush.
>>
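(Just to spell out the arithmetic above, here is a quick standalone sketch
I used to double-check the counts; the sizes are the ones from the example
in the commit message, nothing here is taken from kernel code:)

    #include <stdio.h>

    /* Back-of-the-envelope check of the invalidation counts above.
     * All values are illustrative, not derived from kernel structures.
     */
    int main(void)
    {
            const unsigned long sz_4k  = 4096UL;                 /* granule */
            const unsigned long sz_2m  = 2UL * 1024 * 1024;      /* pgsize  */
            const unsigned long sz_32m = 32UL * 1024 * 1024;     /* total   */

            unsigned long blocks = sz_32m / sz_2m;           /* 16 2MB buffers   */
            unsigned long per_block = sz_2m / sz_4k;         /* 512 per 2MB      */

            printf("TLBIVAs without the patch:  %lu\n", blocks * per_block); /* 8192 */
            printf("TLBIASIDs with the patch:   %lu\n", blocks);             /* 16   */
            return 0;
    }

The per-block count (pgsize/granule) is also what makes the "at least 512
invalidations" argument hold: a partial-walk flush always covers at least
one full table of PTEs, so the count never drops below 512.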
>> Tested on a QTI SM8150 SoC, averaged over 10 iterations of
>> iommu_{map_sg}/unmap:
>>
>> Before this optimization:
>>
>> size    iommu_map_sg    iommu_unmap
>> 4K         2.067 us        1.854 us
>> 64K        9.598 us        8.802 us
>> 1M       148.890 us      130.718 us
>> 2M       305.864 us       67.291 us
>> 12M     1793.604 us      390.838 us
>> 16M     2386.848 us      518.187 us
>> 24M     3563.296 us      775.989 us
>> 32M     4747.171 us     1033.364 us
>>
>> After this optimization:
>>
>> size    iommu_map_sg    iommu_unmap
>> 4K         1.723 us        1.765 us
>> 64K        9.880 us        8.869 us
>> 1M       155.364 us      135.223 us
>> 2M       303.906 us        5.385 us
>> 12M     1786.557 us       21.250 us
>> 16M     2391.890 us       27.437 us
>> 24M     3570.895 us       39.937 us
>> 32M     4755.234 us       51.797 us
>>
>> This is further reduced once map/unmap_pages() support gets in, which
>> will result in just 1 tlb_flush_all() as opposed to 16 tlb_flush_all()
>> calls.
>>
>> Signed-off-by: Sai Prakash Ranjan <saiprakash.ranjan@...eaurora.org>
>> ---
>> drivers/iommu/io-pgtable-arm.c | 7 +++++--
>> 1 file changed, 5 insertions(+), 2 deletions(-)
>>
>> diff --git a/drivers/iommu/io-pgtable-arm.c b/drivers/iommu/io-pgtable-arm.c
>> index 87def58e79b5..c3cb9add3179 100644
>> --- a/drivers/iommu/io-pgtable-arm.c
>> +++ b/drivers/iommu/io-pgtable-arm.c
>> @@ -589,8 +589,11 @@ static size_t __arm_lpae_unmap(struct arm_lpae_io_pgtable *data,
>>  		if (!iopte_leaf(pte, lvl, iop->fmt)) {
>>  			/* Also flush any partial walks */
>> -			io_pgtable_tlb_flush_walk(iop, iova, size,
>> -						  ARM_LPAE_GRANULE(data));
>> +			if (size > ARM_LPAE_GRANULE(data))
>> +				io_pgtable_tlb_flush_all(iop);
>> +			else
> Erm, when will the above condition ever not be true? ;)
>
Ah right, silly me :) We only take this path when tearing down a whole
table entry, so size is always a block size larger than the granule and
the condition is always true.
> Taking a step back, though, what about the impact to drivers other
> than SMMUv2?
The other drivers would be msm_iommu.c and qcom_iommu.c, which do the
same thing as arm-smmu-v2 (page-based invalidations); then there is
ipmmu-vmsa.c, which does tlb_flush_all() for the flush walk.
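For reference, the flush_walk callback in ipmmu-vmsa.c simply routes to
its flush_all implementation; from memory it looks roughly like the below
(a sketch, not a verbatim copy of the driver):

    static void ipmmu_tlb_flush_all(void *cookie)
    {
            struct ipmmu_vmsa_domain *domain = cookie;

            ipmmu_tlb_invalidate(domain);
    }

    static void ipmmu_tlb_flush(unsigned long iova, size_t size,
                                size_t granule, void *cookie)
    {
            /* No per-VA invalidation here; flush the whole TLB instead */
            ipmmu_tlb_flush_all(cookie);
    }

    static const struct iommu_flush_ops ipmmu_flush_ops = {
            .tlb_flush_all  = ipmmu_tlb_flush_all,
            .tlb_flush_walk = ipmmu_tlb_flush,
    };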
> In particular I'm thinking of SMMUv3.2 where the whole
> range can be invalidated by VA in a single command anyway, so the
> additional penalties of TLBIALL are undesirable.
>
Right, so I am thinking we can have a new generic quirk,
IO_PGTABLE_QUIRK_RANGE_INV, to choose between range-based invalidations
(tlb_flush_walk) and tlb_flush_all(). In the case of arm-smmu-v3.2, we
can tie ARM_SMMU_FEAT_RANGE_INV to this quirk and have something like
below, thoughts?
	if (iop->cfg.quirks & IO_PGTABLE_QUIRK_RANGE_INV)
		io_pgtable_tlb_flush_walk(iop, iova, size,
					  ARM_LPAE_GRANULE(data));
	else
		io_pgtable_tlb_flush_all(iop);
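On the arm-smmu-v3 side, the wiring could then be as simple as something
like this (the quirk bit value and exact placement are illustrative only):

    /* include/linux/io-pgtable.h: new quirk (bit position illustrative) */
    #define IO_PGTABLE_QUIRK_RANGE_INV	BIT(8)

    /* arm-smmu-v3.c, while preparing pgtbl_cfg for alloc_io_pgtable_ops() */
    if (smmu->features & ARM_SMMU_FEAT_RANGE_INV)
            pgtbl_cfg.quirks |= IO_PGTABLE_QUIRK_RANGE_INV;

Drivers that don't set the quirk would then automatically get the
tlb_flush_all() behaviour.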
Thanks,
Sai
--
QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a member
of Code Aurora Forum, hosted by The Linux Foundation