Message-ID: <e79a134b-89de-4da1-b64b-b890227fce8a@amd.com>
Date: Wed, 10 Sep 2025 14:11:58 +0200
From: Christian König <christian.koenig@....com>
To: Thadeu Lima de Souza Cascardo <cascardo@...lia.com>,
Huang Rui <ray.huang@....com>, Matthew Auld <matthew.auld@...el.com>,
Matthew Brost <matthew.brost@...el.com>,
Maarten Lankhorst <maarten.lankhorst@...ux.intel.com>,
Maxime Ripard <mripard@...nel.org>, Thomas Zimmermann <tzimmermann@...e.de>,
David Airlie <airlied@...il.com>, Simona Vetter <simona@...ll.ch>
Cc: dri-devel@...ts.freedesktop.org, linux-kernel@...r.kernel.org,
kernel-dev@...lia.com, Sergey Senozhatsky <senozhatsky@...omium.org>
Subject: Re: [PATCH] drm: ttm: do not direct reclaim when allocating high
order pages
On 10.09.25 13:59, Thadeu Lima de Souza Cascardo wrote:
> When the TTM pool tries to allocate new pages, it starts with the max
> order. If there are no pages ready in the system, the page allocator will
> start reclaim. If direct reclaim fails, the allocator will reduce the
> order until it gets all the pages it wants, with whatever order the
> allocator manages to reclaim.
>
> However, while the allocator is reclaiming, lower order pages might be
> available, which would work just fine for the pool allocator. Doing direct
> reclaim just introduces latency in allocating memory.
>
> The system should still start reclaiming in the background with kswapd, but
> the pool allocator should try to allocate a lower order page instead of
> directly reclaiming.
>
> If not even an order-1 page is available, the TTM pool allocator will
> eventually start allocating order-0 pages, at which point it should and
> will directly reclaim.
Yeah, that was discussed quite a bit before, but at least for AMD GPUs that is absolutely not something we should do.
The performance difference between using high and low order pages can be up to 30%, so accepting the added latency is vital for good performance.
We could of course make that depend on the HW in use if it isn't necessary for some other GPU, but at least NVidia and Intel seem to have pretty much the same HW restrictions.
NVidia has been working on extending this to 1GiB pages to reduce the TLB overhead even further.
Regards,
Christian.
>
> Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo@...lia.com>
> ---
> drivers/gpu/drm/ttm/ttm_pool.c | 4 +++-
> 1 file changed, 3 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/gpu/drm/ttm/ttm_pool.c b/drivers/gpu/drm/ttm/ttm_pool.c
> index baf27c70a4193a121fbc8b4e67cd6feb4c612b85..6124a53cd15634c833bce379093b557d2a2660fd 100644
> --- a/drivers/gpu/drm/ttm/ttm_pool.c
> +++ b/drivers/gpu/drm/ttm/ttm_pool.c
> @@ -144,9 +144,11 @@ static struct page *ttm_pool_alloc_page(struct ttm_pool *pool, gfp_t gfp_flags,
> * Mapping pages directly into an userspace process and calling
> * put_page() on a TTM allocated page is illegal.
> */
> - if (order)
> + if (order) {
> gfp_flags |= __GFP_NOMEMALLOC | __GFP_NORETRY | __GFP_NOWARN |
> __GFP_THISNODE;
> + gfp_flags &= ~__GFP_DIRECT_RECLAIM;
> + }
>
> if (!pool->use_dma_alloc) {
> p = alloc_pages_node(pool->nid, gfp_flags, order);
>
> ---
> base-commit: b320789d6883cc00ac78ce83bccbfe7ed58afcf0
> change-id: 20250909-ttm_pool_no_direct_reclaim-ee0807a2d3fe
>
> Best regards,