Message-ID: <fc80adaa-3bbc-4331-abd3-3cfbff9b3dcd@redhat.com>
Date: Fri, 8 Mar 2024 17:45:32 +0100
From: Danilo Krummrich <dakr@...hat.com>
To: Duoming Zhou <duoming@....edu.cn>
Cc: dri-devel@...ts.freedesktop.org, linux-kernel@...r.kernel.org,
daniel@...ll.ch, airlied@...il.com, lyude@...hat.com, kherbst@...hat.com,
timur@...nel.org, jani.nikula@...ux.intel.com, nouveau@...ts.freedesktop.org
Subject: Re: [PATCH v3] nouveau/dmem: handle kcalloc() allocation failure
On 3/6/24 06:01, Duoming Zhou wrote:
> The kcalloc() in nouveau_dmem_evict_chunk() will return NULL if
> physical memory has run out. As a result, dereferencing src_pfns,
> dst_pfns or dma_addrs would lead to NULL pointer dereference bugs.
>
> Moreover, the GPU is going away. If the kcalloc() fails, we cannot
> evict all pages mapping a chunk. So this patch adds the __GFP_NOFAIL
> flag to kcalloc().
>
> Finally, as there is no need for physically contiguous memory,
> this patch switches kcalloc() to kvcalloc(), which can fall back
> to vmalloc() and is therefore far less likely to fail.
>
> Fixes: 249881232e14 ("nouveau/dmem: evict device private memory during release")
> Suggested-by: Danilo Krummrich <dakr@...hat.com>
> Signed-off-by: Duoming Zhou <duoming@....edu.cn>
Applied to drm-misc-fixes, thanks!
> ---
> Changes in v3:
> - Switch kcalloc() to kvcalloc().
>
> drivers/gpu/drm/nouveau/nouveau_dmem.c | 12 ++++++------
> 1 file changed, 6 insertions(+), 6 deletions(-)
>
> diff --git a/drivers/gpu/drm/nouveau/nouveau_dmem.c b/drivers/gpu/drm/nouveau/nouveau_dmem.c
> index 12feecf71e7..6fb65b01d77 100644
> --- a/drivers/gpu/drm/nouveau/nouveau_dmem.c
> +++ b/drivers/gpu/drm/nouveau/nouveau_dmem.c
> @@ -378,9 +378,9 @@ nouveau_dmem_evict_chunk(struct nouveau_dmem_chunk *chunk)
> dma_addr_t *dma_addrs;
> struct nouveau_fence *fence;
>
> - src_pfns = kcalloc(npages, sizeof(*src_pfns), GFP_KERNEL);
> - dst_pfns = kcalloc(npages, sizeof(*dst_pfns), GFP_KERNEL);
> - dma_addrs = kcalloc(npages, sizeof(*dma_addrs), GFP_KERNEL);
> + src_pfns = kvcalloc(npages, sizeof(*src_pfns), GFP_KERNEL | __GFP_NOFAIL);
> + dst_pfns = kvcalloc(npages, sizeof(*dst_pfns), GFP_KERNEL | __GFP_NOFAIL);
> + dma_addrs = kvcalloc(npages, sizeof(*dma_addrs), GFP_KERNEL | __GFP_NOFAIL);
>
> migrate_device_range(src_pfns, chunk->pagemap.range.start >> PAGE_SHIFT,
> npages);
> @@ -406,11 +406,11 @@ nouveau_dmem_evict_chunk(struct nouveau_dmem_chunk *chunk)
> migrate_device_pages(src_pfns, dst_pfns, npages);
> nouveau_dmem_fence_done(&fence);
> migrate_device_finalize(src_pfns, dst_pfns, npages);
> - kfree(src_pfns);
> - kfree(dst_pfns);
> + kvfree(src_pfns);
> + kvfree(dst_pfns);
> for (i = 0; i < npages; i++)
> dma_unmap_page(chunk->drm->dev->dev, dma_addrs[i], PAGE_SIZE, DMA_BIDIRECTIONAL);
> - kfree(dma_addrs);
> + kvfree(dma_addrs);
> }
>
> void
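
For reference, the allocation pattern the patch switches to looks like
this in isolation (a minimal sketch for illustration only; the function
name demo_evict_buffers and its surroundings are hypothetical, not part
of the driver):

    #include <linux/slab.h> /* kvcalloc(), kvfree() */

    static void demo_evict_buffers(unsigned long npages)
    {
    	unsigned long *pfns;

    	/*
    	 * kvcalloc() tries kmalloc() first and falls back to vmalloc()
    	 * when contiguous pages are unavailable; __GFP_NOFAIL makes the
    	 * allocator retry instead of returning NULL, so no error path
    	 * is needed.
    	 */
    	pfns = kvcalloc(npages, sizeof(*pfns), GFP_KERNEL | __GFP_NOFAIL);

    	/* ... fill and consume pfns ... */

    	/* kvfree() handles both kmalloc()- and vmalloc()-backed memory. */
    	kvfree(pfns);
    }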