Message-ID: <67c83b24-01b6-4633-8645-52dc746c32e2@igalia.com>
Date: Fri, 19 Sep 2025 08:43:34 +0100
From: Tvrtko Ursulin <tvrtko.ursulin@...lia.com>
To: Christian König <christian.koenig@....com>,
Thadeu Lima de Souza Cascardo <cascardo@...lia.com>,
Michel Dänzer <michel.daenzer@...lbox.org>,
Huang Rui <ray.huang@....com>, Matthew Auld <matthew.auld@...el.com>,
Matthew Brost <matthew.brost@...el.com>,
Maarten Lankhorst <maarten.lankhorst@...ux.intel.com>,
Maxime Ripard <mripard@...nel.org>, Thomas Zimmermann <tzimmermann@...e.de>,
David Airlie <airlied@...il.com>, Simona Vetter <simona@...ll.ch>
Cc: amd-gfx@...ts.freedesktop.org, dri-devel@...ts.freedesktop.org,
linux-kernel@...r.kernel.org, kernel-dev@...lia.com,
Sergey Senozhatsky <senozhatsky@...omium.org>
Subject: Re: [PATCH RFC v2 0/3] drm/ttm: allow direct reclaim to be skipped
On 19/09/2025 07:46, Christian König wrote:
> On 18.09.25 22:09, Thadeu Lima de Souza Cascardo wrote:
>> On certain workloads, like on ChromeOS when opening multiple tabs and
>> windows, and switching desktops, memory pressure can build up and latency
>> is observed as high order allocations result in memory reclaim. This was
>> seen when running on an amdgpu device.
>>
>> This is caused by TTM pool allocations; turning off direct reclaim for
>> those higher order allocations leads to lower memory pressure.
>>
>> Since turning direct reclaim off might also lead to lower throughput,
>> make it tunable, both as a module parameter that can be changed in sysfs
>> and as a flag when allocating a GEM object.
>>
>> A latency option will avoid direct reclaim for higher order allocations.
>>
>> The throughput option could later be used to more aggressively compact
>> pages or reclaim, by not using __GFP_NORETRY.
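For illustration, the latency preference described above roughly amounts to
dropping __GFP_DIRECT_RECLAIM from the higher order attempts while keeping
__GFP_NORETRY so they fail fast and fall back. A minimal sketch, assuming a
hypothetical prefer_latency flag rather than the exact code in the patches:

    /*
     * Hypothetical sketch, not the patch itself: choose gfp flags for a
     * higher order TTM pool allocation based on a latency preference.
     */
    gfp_t gfp = GFP_USER | __GFP_NOWARN;

    if (order > 0) {
        /* Fail fast and fall back to a lower order rather than retrying. */
        gfp |= __GFP_NORETRY;
        /* Latency preference: skip direct reclaim, leave kswapd reclaim on. */
        if (prefer_latency)
            gfp &= ~__GFP_DIRECT_RECLAIM;
    }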
>
> Well I can only repeat it, at least for amdgpu that is a clear NAK from my side to this.
>
> The ability to allocate huge pages is a must-have for the driver.
Disclaimer: I would not go system-wide but per device - so somewhere in
sysfs rather than a modparam. That kind of toggle does not sound
problematic to me, since it leaves the policy outside the kernel and
allows people to tune to their liking.
One side question though - does AMD benefit from larger than 2MiB
contiguous blocks? IIUC the maximum PTE size is 2MiB, so maybe not? In
which case it may make sense to add some TTM API letting drivers tell the
pool allocator the maximum order worth bothering with. Anything larger may
have diminishing benefit relative to the disproportionate pressure it puts
on the memory allocator and reclaim.
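To make that concrete, such an API could be a simple per-pool cap set by
the driver at init time. A minimal sketch - ttm_pool_set_max_order and the
max_order field are hypothetical names, not existing TTM interfaces:

    /* Hypothetical helper: clamp the largest order the pool will attempt. */
    static inline void ttm_pool_set_max_order(struct ttm_pool *pool,
                                              unsigned int order)
    {
        pool->max_order = min_t(unsigned int, order, MAX_PAGE_ORDER);
    }

    /* e.g. amdgpu capping pool allocations at 2MiB during device init: */
    ttm_pool_set_max_order(&adev->mman.bdev.pool, get_order(SZ_2M));

The pool allocator would then start its attempts at pool->max_order instead
of the global maximum, so orders above the device's largest PTE size never
reach the page allocator.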
Regards,
Tvrtko
> The alternative I can offer is to disable the fallback, which in your case would trigger the OOM killer.
>
> Regards,
> Christian.
>
>>
>> Other drivers can later opt to use this mechanism too.
>>
>> Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo@...lia.com>
>> ---
>> Changes in v2:
>> - Make disabling direct reclaim an option.
>> - Link to v1: https://lore.kernel.org/r/20250910-ttm_pool_no_direct_reclaim-v1-1-53b0fa7f80fa@igalia.com
>>
>> ---
>> Thadeu Lima de Souza Cascardo (3):
>> ttm: pool: allow requests to prefer latency over throughput
>> ttm: pool: add a module parameter to set latency preference
>> drm/amdgpu: allow allocation preferences when creating GEM object
>>
>> drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c | 3 ++-
>> drivers/gpu/drm/amd/amdgpu/amdgpu_object.c | 3 ++-
>> drivers/gpu/drm/ttm/ttm_pool.c | 23 +++++++++++++++++------
>> drivers/gpu/drm/ttm/ttm_tt.c | 2 +-
>> include/drm/ttm/ttm_bo.h | 5 +++++
>> include/drm/ttm/ttm_pool.h | 2 +-
>> include/drm/ttm/ttm_tt.h | 2 +-
>> include/uapi/drm/amdgpu_drm.h | 9 +++++++++
>> 8 files changed, 38 insertions(+), 11 deletions(-)
>> ---
>> base-commit: f83ec76bf285bea5727f478a68b894f5543ca76e
>> change-id: 20250909-ttm_pool_no_direct_reclaim-ee0807a2d3fe
>>
>> Best regards,
>