Message-ID: <8fd2c508-cbe9-4050-ba02-85b22fcff10d@collabora.com>
Date: Mon, 26 Jan 2026 10:03:19 +0100
From: Benjamin Gaignard <benjamin.gaignard@...labora.com>
To: Will Deacon <will@...nel.org>
Cc: joro@...tes.org, robin.murphy@....com, robh@...nel.org,
krzk+dt@...nel.org, conor+dt@...nel.org, heiko@...ech.de,
nicolas.dufresne@...labora.com, p.zabel@...gutronix.de, mchehab@...nel.org,
iommu@...ts.linux.dev, devicetree@...r.kernel.org,
linux-kernel@...r.kernel.org, linux-arm-kernel@...ts.infradead.org,
linux-rockchip@...ts.infradead.org, linux-media@...r.kernel.org,
kernel@...labora.com
Subject: Re: [PATCH v11 3/7] iommu: Add verisilicon IOMMU driver
On 23/01/2026 at 18:14, Will Deacon wrote:
> On Wed, Jan 21, 2026 at 02:50:18PM +0100, Benjamin Gaignard wrote:
>> Le 21/01/2026 à 13:51, Will Deacon a écrit :
>>> On Mon, Jan 19, 2026 at 03:03:44PM +0100, Benjamin Gaignard wrote:
>>>>>>>>>> +static const struct iommu_ops vsi_iommu_ops = {
>>>>>>>>>> + .identity_domain = &vsi_identity_domain,
>>>>>>>>>> + .release_domain = &vsi_identity_domain,
>>>>>>>>>> + .domain_alloc_paging = vsi_iommu_domain_alloc_paging,
>>>>>>>>>> + .of_xlate = vsi_iommu_of_xlate,
>>>>>>>>>> + .probe_device = vsi_iommu_probe_device,
>>>>>>>>>> + .release_device = vsi_iommu_release_device,
>>>>>>>>>> + .device_group = generic_single_device_group,
>>>>>>>>>> + .owner = THIS_MODULE,
>>>>>>>>>> + .default_domain_ops = &(const struct iommu_domain_ops) {
>>>>>>>>>> + .attach_dev = vsi_iommu_attach_device,
>>>>>>>>>> + .map_pages = vsi_iommu_map,
>>>>>>>>>> + .unmap_pages = vsi_iommu_unmap,
>>>>>>>>>> + .flush_iotlb_all = vsi_iommu_flush_tlb_all,
>>>>>>>>> This has no callers and so your unmap routine appears to be broken.
>>>>>>>> It is a leftover from a previous attempt to let the video decoder clean/flush
>>>>>>>> the iommu through a function from the API.
>>>>>>>> Now it uses vsi_iommu_restore_ctx().
>>>>>>>> I will remove it in version 12.
>>>>>>> Don't you still need some invalidation on the unmap path?
>>>>>> In vsi_iommu_unmap_iova() the page is invalidated by calling vsi_mk_pte_invalid().
>>>>> But that just writes an invalid descriptor and doesn't appear to invalidate
>>>>> the TLB at all.
>>>>>
>>>>>> That clears BIT(0) so the hardware knows the page is invalid.
>>>>>> Am I missing something here?
>>>>> Yes, the TLB structure needs to be invalidated so that the page-table
>>>>> walker sees the new value that you have written in memory.
>>>>>
>>>>> The rockchip driver gets this correct...
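
For context, that helper only touches the descriptor in memory; a minimal sketch of
what it does (the helper itself is not quoted in this thread, so the PTE type and the
exact body are assumptions):

static u32 vsi_mk_pte_invalid(u32 pte)
{
	/*
	 * Clear the valid bit (BIT(0)) so the page-table walker treats the
	 * entry as invalid. This only updates the table in memory; it does
	 * not invalidate anything a TLB may already have cached.
	 */
	return pte & ~BIT(0);
}
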
>>>> Rockchip hardware has a ZAP_ONE_LINE register which doesn't exist on Verisilicon
>>>> hardware.
>>> Presumably you have some sort of Verisilicon datasheet or downstream driver
>>> from which you can infer the TLB invalidation runes?
>> I have only this downstream driver:
>> https://github.com/rockchip-linux/kernel/blob/develop-6.1/drivers/iommu/rockchip-iommu-av1d.c
>> No datasheet...
>>
>>>> I have tried to use VSI_MMU_BIT_FLUSH in the VSI driver after unmapping the iova,
>>>> but it doesn't work.
>>> What do you mean by "doesn't work"? If it works without doing any
>>> invalidation at all, then it's very peculiar that adding the invalidation
>>> would introduce issues.
>> I mean the VSI_MMU_BIT_FLUSH register can't be used to invalidate the TLB.
>> I think the hardware iterates over the page tables in memory and
>> checks the valid/invalid bit.
> I bet it doesn't: that would be horrible for performance.
>
> The hardware clearly has TLB invalidation support, as the downstream driver
> that you linked above implements av1_iommu_flush_tlb_all() to poke it.
> If the hardware has a TLB, then unmapping a page-table means you need to:
>
> 1. Clear the valid bit from the descriptor in memory
> 2. Have some sort of memory barrier
> 3. Invalidate the TLB
> 4. Wait for the invalidation to complete
That is exactly what I tried to do by calling vsi_iommu_flush_tlb_all() (minus the lock)
after vsi_iommu_unmap_iova() in vsi_iommu_unmap(), roughly the sequence sketched below,
but that doesn't work and even makes the system crash sometimes.
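
For reference, a minimal sketch of that sequence, assuming the generic unmap_pages
prototype from linux/iommu.h; the vsi_* helpers are the ones named in this thread,
but their exact prototypes, the vsi_iommu_domain type and the to_vsi_domain()
accessor are placeholders, not the real driver code:

static size_t vsi_iommu_unmap(struct iommu_domain *domain, unsigned long iova,
			      size_t pgsize, size_t pgcount,
			      struct iommu_iotlb_gather *gather)
{
	struct vsi_iommu_domain *vsi_domain = to_vsi_domain(domain);
	size_t unmapped;

	/* 1. Clear the valid bit of the descriptors in memory */
	unmapped = vsi_iommu_unmap_iova(vsi_domain, iova, pgsize * pgcount);

	/* 2. Make sure the table update is visible before the flush */
	wmb();

	/* 3 + 4. Invalidate the TLB and wait for the flush to complete */
	vsi_iommu_flush_tlb_all(domain);

	return unmapped;
}
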
Benjamin
>
> All IOMMUs tend to work like that and I don't think this one is any
> different.
>
> Will
>