Message-ID: <c451e343-4014-4de5-87b8-50429399adaa@linux.dev>
Date: Sat, 26 Oct 2024 01:19:42 +0800
From: Sui Jingfeng <sui.jingfeng@...ux.dev>
To: Lucas Stach <l.stach@...gutronix.de>,
Russell King <linux+etnaviv@...linux.org.uk>,
Christian Gmeiner <christian.gmeiner@...il.com>
Cc: David Airlie <airlied@...il.com>, Simona Vetter <simona@...ll.ch>,
etnaviv@...ts.freedesktop.org, dri-devel@...ts.freedesktop.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH 2/3] drm/etnaviv: Map and unmap the GPU VA range with
respect to its user size
Hi,
On 2024/10/7 18:17, Lucas Stach wrote:
> Am Samstag, dem 05.10.2024 um 03:42 +0800 schrieb Sui Jingfeng:
>> Since the GPU VA space is compact in terms of 4 KiB units, mapping
>> and/or unmapping an area that doesn't belong to a context breaks the
>> philosophy of PPAS. That results in severe errors: GPU hangs, MMU
>> faults (page not present) and the like.
>>
>> Shrink the usable size of an etnaviv GEM buffer object to its user
>> size, instead of the original physical size of its backing memory.
>>
>> Signed-off-by: Sui Jingfeng <sui.jingfeng@...ux.dev>
>> ---
>> drivers/gpu/drm/etnaviv/etnaviv_mmu.c | 28 +++++++++------------------
>> 1 file changed, 9 insertions(+), 19 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/etnaviv/etnaviv_mmu.c b/drivers/gpu/drm/etnaviv/etnaviv_mmu.c
>> index 6fbc62772d85..a52ec5eb0e3d 100644
>> --- a/drivers/gpu/drm/etnaviv/etnaviv_mmu.c
>> +++ b/drivers/gpu/drm/etnaviv/etnaviv_mmu.c
>> @@ -70,8 +70,10 @@ static int etnaviv_context_map(struct etnaviv_iommu_context *context,
>> }
>>
>> static int etnaviv_iommu_map(struct etnaviv_iommu_context *context, u32 iova,
>> + unsigned int user_len,
>> struct sg_table *sgt, int prot)
>> -{ struct scatterlist *sg;
>> +{
>> + struct scatterlist *sg;
>> unsigned int da = iova;
>> unsigned int i;
>> int ret;
>> @@ -81,7 +83,8 @@ static int etnaviv_iommu_map(struct etnaviv_iommu_context *context, u32 iova,
>>
>> for_each_sgtable_dma_sg(sgt, sg, i) {
>> phys_addr_t pa = sg_dma_address(sg) - sg->offset;
>> - size_t bytes = sg_dma_len(sg) + sg->offset;
>> + unsigned int phys_len = sg_dma_len(sg) + sg->offset;
>> + size_t bytes = MIN(phys_len, user_len);
>>
>> VERB("map[%d]: %08x %pap(%zx)", i, iova, &pa, bytes);
>>
>> @@ -89,6 +92,7 @@ static int etnaviv_iommu_map(struct etnaviv_iommu_context *context, u32 iova,
>> if (ret)
>> goto fail;
>>
>> + user_len -= bytes;
>> da += bytes;
>> }
> Since the MIN(phys_len, user_len) may limit the mapped amount in the
> wrong direction,
I was wondering whether this could really happen.
If 'phys_len <= user_len' is true, then 'bytes' is a multiple of
PAGE_SIZE, since our sg table is created by drm_prime_pages_to_sg().
So the program still works exactly the same as before.
It only differs from the previous behavior when 'phys_len > user_len' is
true, but in that case it is the tail SG entry whose mapped size is not
a multiple of PAGE_SIZE. The 'bytes' will then be *exactly* the size of
the remaining GPUVA range we should map.
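
To illustrate with made-up numbers (a plain userspace sketch of just the
loop arithmetic, not kernel code; the entry sizes and the user size are
hypothetical):

#include <stdio.h>

#define PAGE_SIZE 4096u
#define MIN(a, b) ((a) < (b) ? (a) : (b))

int main(void)
{
	/* Hypothetical SG entry sizes, page multiples such as a
	 * drm_prime_pages_to_sg()-built table would give us. */
	unsigned int phys_len[] = { 2 * PAGE_SIZE, PAGE_SIZE, PAGE_SIZE };
	/* Hypothetical user size that is not a page multiple. */
	unsigned int user_len = 3 * PAGE_SIZE + 512;
	int i;

	for (i = 0; i < 3; i++) {
		unsigned int bytes = MIN(phys_len[i], user_len);

		/* Only the tail entry is clamped: 8192, 4096, then 512. */
		printf("map[%d]: %u bytes\n", i, bytes);
		user_len -= bytes;
	}

	/* The whole user range was consumed by the walk. */
	printf("user_len left: %u\n", user_len);
	return 0;
}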
> I would think it would be good to add a
> WARN_ON(user_len != 0) after the dma SG iteration.
So the code here seems nearly always correct, no?
Or do you mean the case where the CPU page size is smaller than 4 KiB?
I have never worked with a CPU that has a page size below 4 KiB.
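
That said, if we do want the check as a belt-and-braces measure, I guess
it would sit right after the loop, something like this (untested sketch
on top of this patch):

	for_each_sgtable_dma_sg(sgt, sg, i) {
		...
		user_len -= bytes;
		da += bytes;
	}

	/* Every byte of the user range must have been consumed above. */
	WARN_ON(user_len != 0);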
Regards,
Sui
>>
>> @@ -104,21 +108,7 @@ static int etnaviv_iommu_map(struct etnaviv_iommu_context *context, u32 iova,
>> static void etnaviv_iommu_unmap(struct etnaviv_iommu_context *context, u32 iova,
>> struct sg_table *sgt, unsigned len)
>> {
>> - struct scatterlist *sg;
>> - unsigned int da = iova;
>> - int i;
>> -
>> - for_each_sgtable_dma_sg(sgt, sg, i) {
>> - size_t bytes = sg_dma_len(sg) + sg->offset;
>> -
>> - etnaviv_context_unmap(context, da, bytes);
>> -
>> - VERB("unmap[%d]: %08x(%zx)", i, iova, bytes);
>> -
>> - BUG_ON(!PAGE_ALIGNED(bytes));
>> -
>> - da += bytes;
>> - }
>> + etnaviv_context_unmap(context, iova, len);
> This drops some sanity checks, but I have only ever seen them fire when
> we had other kernel memory corruption issues, so I'm fine with the
> simplification you did here.
>
> Regards,
> Lucas
>
>>
>> context->flush_seq++;
>> }
>> @@ -131,7 +121,7 @@ static void etnaviv_iommu_remove_mapping(struct etnaviv_iommu_context *context,
>> lockdep_assert_held(&context->lock);
>>
>> etnaviv_iommu_unmap(context, mapping->vram_node.start,
>> - etnaviv_obj->sgt, etnaviv_obj->base.size);
>> + etnaviv_obj->sgt, etnaviv_obj->user_size);
>> drm_mm_remove_node(&mapping->vram_node);
>> }
>>
>> @@ -314,7 +304,7 @@ int etnaviv_iommu_map_gem(struct etnaviv_iommu_context *context,
>> goto unlock;
>>
>> mapping->iova = node->start;
>> - ret = etnaviv_iommu_map(context, node->start, sgt,
>> + ret = etnaviv_iommu_map(context, node->start, user_size, sgt,
>> ETNAVIV_PROT_READ | ETNAVIV_PROT_WRITE);
>>
>> if (ret < 0) {
--
Best regards,
Sui