Date: Thu, 23 Sep 2021 19:58:17 +0800
From: Yong Wu <yong.wu@...iatek.com>
To: Joerg Roedel <joro@...tes.org>, Rob Herring <robh+dt@...nel.org>,
	Matthias Brugger <matthias.bgg@...il.com>, Will Deacon <will@...nel.org>,
	Robin Murphy <robin.murphy@....com>
CC: Krzysztof Kozlowski <krzysztof.kozlowski@...onical.com>,
	Tomasz Figa <tfiga@...omium.org>, <linux-mediatek@...ts.infradead.org>,
	<srv_heupstream@...iatek.com>, <devicetree@...r.kernel.org>,
	<linux-kernel@...r.kernel.org>, <linux-arm-kernel@...ts.infradead.org>,
	<iommu@...ts.linux-foundation.org>, Hsin-Yi Wang <hsinyi@...omium.org>,
	<yong.wu@...iatek.com>, <youlin.pei@...iatek.com>,
	<anan.sun@...iatek.com>, <chao.hao@...iatek.com>,
	<yen-chang.chen@...iatek.com>
Subject: [PATCH v3 10/33] iommu/mediatek: Add tlb_lock in tlb_flush_all

tlb_flush_all also touches the TLB operation registers, so add the
spinlock in it to protect those registers. Since the tlb_flush_range
path already holds the spinlock, release the lock there a bit earlier
so that the timeout log is printed, and the fallback full flush runs,
outside the spinlock.

Signed-off-by: Yong Wu <yong.wu@...iatek.com>
---
 drivers/iommu/mtk_iommu.c | 25 ++++++++++++++++++-------
 1 file changed, 18 insertions(+), 7 deletions(-)

diff --git a/drivers/iommu/mtk_iommu.c b/drivers/iommu/mtk_iommu.c
index 0b4c30baa864..ab484d20b441 100644
--- a/drivers/iommu/mtk_iommu.c
+++ b/drivers/iommu/mtk_iommu.c
@@ -204,15 +204,24 @@ static struct mtk_iommu_domain *to_mtk_domain(struct iommu_domain *dom)
 	return container_of(dom, struct mtk_iommu_domain, domain);
 }
 
-static void mtk_iommu_tlb_flush_all(struct mtk_iommu_data *data)
+static void mtk_iommu_tlb_do_flush_all(struct mtk_iommu_data *data)
 {
-	if (pm_runtime_get_if_in_use(data->dev) <= 0)
-		return;
+	unsigned long flags;
 
+	spin_lock_irqsave(&data->tlb_lock, flags);
 	writel_relaxed(F_INVLD_EN1 | F_INVLD_EN0,
 		       data->base + data->plat_data->inv_sel_reg);
 	writel_relaxed(F_ALL_INVLD, data->base + REG_MMU_INVALIDATE);
 	wmb(); /* Make sure the tlb flush all done */
+	spin_unlock_irqrestore(&data->tlb_lock, flags);
+}
+
+static void mtk_iommu_tlb_flush_all(struct mtk_iommu_data *data)
+{
+	if (pm_runtime_get_if_in_use(data->dev) <= 0)
+		return;
+
+	mtk_iommu_tlb_do_flush_all(data);
 	pm_runtime_put(data->dev);
 }
 
@@ -247,14 +256,16 @@ static void mtk_iommu_tlb_flush_range_sync(unsigned long iova, size_t size,
 	/* tlb sync */
 	ret = readl_poll_timeout_atomic(data->base + REG_MMU_CPE_DONE,
 					tmp, tmp != 0, 10, 1000);
+
+	/* Clear the CPE status */
+	writel_relaxed(0, data->base + REG_MMU_CPE_DONE);
+	spin_unlock_irqrestore(&data->tlb_lock, flags);
+
 	if (ret) {
 		dev_warn(data->dev,
 			 "Partial TLB flush timed out, falling back to full flush\n");
-		mtk_iommu_tlb_flush_all(data);
+		mtk_iommu_tlb_do_flush_all(data);
 	}
-	/* Clear the CPE status */
-	writel_relaxed(0, data->base + REG_MMU_CPE_DONE);
-	spin_unlock_irqrestore(&data->tlb_lock, flags);
 
 	if (has_pm)
 		pm_runtime_put(data->dev);
-- 
2.18.0
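
The whole of the change is the split between mtk_iommu_tlb_do_flush_all,
which takes tlb_lock around the two register writes, and the
mtk_iommu_tlb_flush_all wrapper, which keeps the runtime-PM handling.
For readers who want to poke at that pattern outside the kernel, here is
a minimal userspace C sketch, compilable with gcc -pthread. It is not
driver code: the two register variables, the device_powered flag, and
the pthread spinlock are only stand-ins for the MMIO registers, the
runtime-PM state, and data->tlb_lock.

/* Userspace model (not MediaTek driver code): all writes to the shared
 * "registers" go through one lock-holding helper, and the power check
 * stays in a thin wrapper, mirroring the structure of the patch above. */
#include <pthread.h>
#include <stdio.h>

static pthread_spinlock_t tlb_lock;           /* stands in for data->tlb_lock */
static volatile unsigned int inv_sel_reg;     /* stands in for inv_sel_reg MMIO */
static volatile unsigned int invalidate_reg;  /* stands in for REG_MMU_INVALIDATE */
static int device_powered = 1;                /* stands in for runtime-PM state */

/* Like mtk_iommu_tlb_do_flush_all(): the register writes, under the lock. */
static void tlb_do_flush_all(void)
{
	pthread_spin_lock(&tlb_lock);
	inv_sel_reg = 0x3;      /* F_INVLD_EN1 | F_INVLD_EN0 */
	invalidate_reg = 0x2;   /* F_ALL_INVLD; the driver adds wmb() here */
	pthread_spin_unlock(&tlb_lock);
}

/* Like mtk_iommu_tlb_flush_all(): a wrapper that only adds the power
 * handling (pm_runtime_get_if_in_use/pm_runtime_put in the driver). */
static void tlb_flush_all(void)
{
	if (!device_powered)
		return;
	tlb_do_flush_all();
}

static void *flusher(void *arg)
{
	(void)arg;
	for (int i = 0; i < 100000; i++)
		tlb_flush_all();
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_spin_init(&tlb_lock, PTHREAD_PROCESS_PRIVATE);
	pthread_create(&a, NULL, flusher, NULL);
	pthread_create(&b, NULL, flusher, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	puts("register writes stayed serialized under tlb_lock");
	return 0;
}

The split serves the same purpose here as in the driver: the timeout
fallback in the range flush already holds a runtime-PM reference, so it
can call the register-write helper directly instead of going through
the wrapper's pm_runtime_get_if_in_use/pm_runtime_put pair.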