Message-ID: <fbe9453d-53c5-44d1-a478-2496c5928603@redhat.com>
Date: Tue, 5 Mar 2024 14:40:43 +0100
From: Danilo Krummrich <dakr@...hat.com>
To: Duoming Zhou <duoming@....edu.cn>
Cc: nouveau@...ts.freedesktop.org, dri-devel@...ts.freedesktop.org,
daniel@...ll.ch, airlied@...il.com, lyude@...hat.com, kherbst@...hat.com,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH] nouveau/dmem: handle kcalloc() allocation failure
Hi Duoming,
On 3/3/24 08:53, Duoming Zhou wrote:
> The kcalloc() in nouveau_dmem_evict_chunk() will return NULL if
> physical memory has run out. As a result, dereferencing
> src_pfns, dst_pfns or dma_addrs will lead to null pointer
> dereference bugs.
>
> This patch uses stack variables to replace the kcalloc().
>
> Fixes: 249881232e14 ("nouveau/dmem: evict device private memory during release")
> Signed-off-by: Duoming Zhou <duoming@....edu.cn>
> ---
> drivers/gpu/drm/nouveau/nouveau_dmem.c | 13 +++++--------
> 1 file changed, 5 insertions(+), 8 deletions(-)
>
> diff --git a/drivers/gpu/drm/nouveau/nouveau_dmem.c b/drivers/gpu/drm/nouveau/nouveau_dmem.c
> index 12feecf71e7..9a578262c6d 100644
> --- a/drivers/gpu/drm/nouveau/nouveau_dmem.c
> +++ b/drivers/gpu/drm/nouveau/nouveau_dmem.c
> @@ -374,13 +374,13 @@ static void
> nouveau_dmem_evict_chunk(struct nouveau_dmem_chunk *chunk)
> {
> unsigned long i, npages = range_len(&chunk->pagemap.range) >> PAGE_SHIFT;
> - unsigned long *src_pfns, *dst_pfns;
> - dma_addr_t *dma_addrs;
> + unsigned long src_pfns[npages], dst_pfns[npages];
> + dma_addr_t dma_addrs[npages];
Please don't use variable length arrays; this can potentially blow up
the stack.
As a fix, I think we should allocate with __GFP_NOFAIL instead.
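Roughly something like this (untested sketch, just to illustrate the idea):

	/* __GFP_NOFAIL makes the allocation retry until it succeeds,
	 * so the kcalloc() calls cannot return NULL and no error
	 * handling is needed here.
	 */
	src_pfns = kcalloc(npages, sizeof(*src_pfns), GFP_KERNEL | __GFP_NOFAIL);
	dst_pfns = kcalloc(npages, sizeof(*dst_pfns), GFP_KERNEL | __GFP_NOFAIL);
	dma_addrs = kcalloc(npages, sizeof(*dma_addrs), GFP_KERNEL | __GFP_NOFAIL);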
- Danilo
> struct nouveau_fence *fence;
>
> - src_pfns = kcalloc(npages, sizeof(*src_pfns), GFP_KERNEL);
> - dst_pfns = kcalloc(npages, sizeof(*dst_pfns), GFP_KERNEL);
> - dma_addrs = kcalloc(npages, sizeof(*dma_addrs), GFP_KERNEL);
> + memset(src_pfns, 0, npages);
> + memset(dst_pfns, 0, npages);
> + memset(dma_addrs, 0, npages);
>
> migrate_device_range(src_pfns, chunk->pagemap.range.start >> PAGE_SHIFT,
> npages);
> @@ -406,11 +406,8 @@ nouveau_dmem_evict_chunk(struct nouveau_dmem_chunk *chunk)
> migrate_device_pages(src_pfns, dst_pfns, npages);
> nouveau_dmem_fence_done(&fence);
> migrate_device_finalize(src_pfns, dst_pfns, npages);
> - kfree(src_pfns);
> - kfree(dst_pfns);
> for (i = 0; i < npages; i++)
> dma_unmap_page(chunk->drm->dev->dev, dma_addrs[i], PAGE_SIZE, DMA_BIDIRECTIONAL);
> - kfree(dma_addrs);
> }
>
> void