Message-ID: <20201019113100.23661-1-chao.hao@mediatek.com>
Date: Mon, 19 Oct 2020 19:30:56 +0800
From: Chao Hao <chao.hao@...iatek.com>
To: Joerg Roedel <joro@...tes.org>,
Matthias Brugger <matthias.bgg@...il.com>
CC: <iommu@...ts.linux-foundation.org>, <linux-kernel@...r.kernel.org>,
<linux-arm-kernel@...ts.infradead.org>,
<linux-mediatek@...ts.infradead.org>, <wsd_upstream@...iatek.com>,
Yong Wu <yong.wu@...iatek.com>, FY Yang <fy.yang@...iatek.com>,
Jun Wen <jun.wen@...iatek.com>,
Mingyuan Ma <mingyuan.ma@...iatek.com>,
Chao Hao <chao.hao@...iatek.com>
Subject: [PATCH 0/4] MTK_IOMMU: Optimize mapping / unmapping performance

On MTK platforms, mtk_iommu currently uses iotlb_sync(), tlb_add_range() and tlb_flush_walk/leaf()
to perform TLB sync when the IOMMU driver maps or unmaps an iova range. But when the buffer is
large, it may consist of many pages (4K/8K/64K/1MB......), so the driver may run TLB sync many
times for a single mapping, which degrades performance seriously. To resolve this, we add an
iotlb_sync_range() callback to iommu_ops; it takes an iova and size and performs one TLB sync
for the whole range. mtk_iommu will call iotlb_sync_range() once the whole mapping/unmapping is
completed, and can then drop iotlb_sync(), tlb_add_range() and tlb_flush_walk/leaf().
So this patchset replaces iotlb_sync(), tlb_add_range() and tlb_flush_walk/leaf() with the
iotlb_sync_range() callback.
Chao Hao (4):
iommu: Introduce iotlb_sync_range callback
iommu/mediatek: Add iotlb_sync_range() support
iommu/mediatek: Remove unnecessary tlb sync
iommu/mediatek: Adjust iotlb_sync_range
drivers/iommu/dma-iommu.c | 9 +++++++++
drivers/iommu/iommu.c | 7 +++++++
drivers/iommu/mtk_iommu.c | 36 ++++++++----------------------------
include/linux/iommu.h | 2 ++
4 files changed, 26 insertions(+), 28 deletions(-)
--
2.18.0