Message-ID: <aM06y7MP6LzHMBK7@quatroqueijos.cascardo.eti.br>
Date: Fri, 19 Sep 2025 08:13:15 -0300
From: Thadeu Lima de Souza Cascardo <cascardo@...lia.com>
To: Christian König <christian.koenig@....com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@...lia.com>,
	Michel Dänzer <michel.daenzer@...lbox.org>,
	Huang Rui <ray.huang@....com>,
	Matthew Auld <matthew.auld@...el.com>,
	Matthew Brost <matthew.brost@...el.com>,
	Maarten Lankhorst <maarten.lankhorst@...ux.intel.com>,
	Maxime Ripard <mripard@...nel.org>,
	Thomas Zimmermann <tzimmermann@...e.de>,
	David Airlie <airlied@...il.com>, Simona Vetter <simona@...ll.ch>,
	amd-gfx@...ts.freedesktop.org, dri-devel@...ts.freedesktop.org,
	linux-kernel@...r.kernel.org, kernel-dev@...lia.com,
	Sergey Senozhatsky <senozhatsky@...omium.org>
Subject: Re: [PATCH RFC v2 0/3] drm/ttm: allow direct reclaim to be skipped

On Fri, Sep 19, 2025 at 10:01:26AM +0200, Christian König wrote:
> On 19.09.25 09:43, Tvrtko Ursulin wrote:
> > On 19/09/2025 07:46, Christian König wrote:
> >> On 18.09.25 22:09, Thadeu Lima de Souza Cascardo wrote:
> >>> On certain workloads, like on ChromeOS when opening multiple tabs and
> >>> windows and switching desktops, memory pressure can build up and
> >>> latency spikes are observed as high-order allocations trigger memory
> >>> reclaim. This was observed when running on amdgpu.
> >>>
> >>> This is caused by TTM pool allocations; turning off direct reclaim for
> >>> those higher order allocations leads to lower memory pressure.
> >>>
> >>> Since turning direct reclaim off might also lead to lower throughput,
> >>> make it tunable, both as a module parameter that can be changed in sysfs
> >>> and as a flag when allocating a GEM object.
> >>>
> >>> A latency option will avoid direct reclaim for higher order allocations.
> >>>
> >>> The throughput option could later be used to compact pages or reclaim
> >>> more aggressively, by not using __GFP_NORETRY.
> >>
> >> Well, I can only repeat it: at least for amdgpu, that is a clear NAK from my side.
> >>
> >> The behavior of allocating huge pages is a must-have for the driver.
> > 
> > Disclaimer: I wouldn't go system-wide but per device - so somewhere in sysfs rather than a modparam. That kind of toggle does not sound problematic to me, since it leaves the policy outside the kernel and allows people to tune it to their liking.
> 
> Yeah, I've also written before that if this is somehow beneficial for nouveau (for example), then I don't have any problem with making the policy device-dependent.
> 
> But for amdgpu we have had so many bad experiences with this approach that I absolutely can't accept it.

The mechanism here allows it to be set per device. I even considered
including that as a patch in the RFC, but opted to send it out sooner so we
could have this discussion.
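
For reference, the latency preference boils down to clearing
__GFP_DIRECT_RECLAIM for high-order attempts. A minimal sketch, with
illustrative names rather than the actual patch code:

static gfp_t ttm_pool_gfp(unsigned int order, bool prefer_latency)
{
	gfp_t gfp = GFP_KERNEL;

	if (order) {
		/* Reduced-effort flags along the lines TTM already uses
		 * for high-order pool allocations. */
		gfp |= __GFP_NOMEMALLOC | __GFP_NORETRY | __GFP_NOWARN;
		/* Hypothetical knob: fail fast and fall back to a lower
		 * order instead of entering direct reclaim. kswapd can
		 * still be woken, since __GFP_KSWAPD_RECLAIM remains set
		 * as part of GFP_KERNEL. */
		if (prefer_latency)
			gfp &= ~__GFP_DIRECT_RECLAIM;
	}
	return gfp;
}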

> 
> > One side question though - does AMD benefit from contiguous blocks larger than 2MiB? IIUC the maximum PTE is 2MiB, so maybe not? In that case it may make sense to add a TTM API letting drivers tell the pool allocator the maximum order worth bothering with. Anything larger may have diminishing benefit relative to the disproportionate pressure it puts on the memory allocator and reclaim.
> 
> Using 1GiB allocations would allow the page tables to skip another layer on AMD GPUs, but the biggest benefit is in going from 4KiB to 2MiB, since that can be handled more efficiently by the L1. 2MiB allocations then also bring an additional benefit for the L2.
> 
> Apart from performance, on AMD GPUs there are also some HW features which only work with huge pages; e.g. on some laptops you can get flickering on the display if the scanout buffer is backed by too many small pages.
> 
> NVidia used to work on 1GiB allocations, which as far as I know was the kickoff for the whole ongoing switch to using folios instead of pages. And from reading publicly available documentation I have the impression that NVidia GPUs work more or less the same as AMD GPUs regarding the TLB.
> 
> Another alternative would be to add a WARN_ONCE() when we have to fall back to lower order pages, but that wouldn't help the end user either. It would just make it more obvious that you need more memory for a specific use case, without triggering the OOM killer.
> 
> Regards,
> Christian.
> 
> > 
> > Regards,
> > 
> > Tvrtko
> > 
> >> The alternative I can offer is to disable the fallback, which in your case would trigger the OOM killer.
> >>

Warning could be as simple as removing __GFP_NOWARN. But I don't think we
want either a warning or the OOM killer while allocating lower order pages
is still possible. That will already happen when we get to order-0 pages,
where no fallback is available anymore; there it makes sense to try harder
and warn if no page can be allocated.
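
Roughly, and purely as an illustration rather than the actual patch: keep
__GFP_NOWARN for any order that still has a fallback, and let only the
final order-0 attempt warn:

	struct page *p;

	if (order)	/* a fallback to order - 1 is still possible */
		gfp_flags |= __GFP_NOWARN;
	p = alloc_pages(gfp_flags, order);	/* order-0 failure warns */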

Under my current workload, the balance skews towards order-0 pages:
comparing runs with and without direct reclaim, the number of order-10 and
order-9 pages drops by half. So I understand your concern with respect to
the impact on the GPU TLB and potential flickering. Is there a way we can
measure it on the devices we are using? And then, if it does not turn out
to be a problem on those devices, would making this a per-device setting be
acceptable to you? That way, userspace could keep a list of devices where
it is okay to prefer skipping reclaim over getting huge pages, and set it
when the workload prefers lower latency in those allocations.
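
For illustration, per-allocation opt-in from userspace could look roughly
like the below. AMDGPU_GEM_CREATE_PREFER_LATENCY is a made-up name standing
in for whatever flag the uAPI change in patch 3 ends up exposing:

#include <stdint.h>
#include <sys/ioctl.h>
#include <drm/amdgpu_drm.h>

static int create_latency_bo(int fd, uint64_t size)
{
	/* AMDGPU_GEM_CREATE_PREFER_LATENCY is hypothetical. */
	union drm_amdgpu_gem_create args = {
		.in.bo_size = size,
		.in.alignment = 4096,
		.in.domains = AMDGPU_GEM_DOMAIN_GTT,
		.in.domain_flags = AMDGPU_GEM_CREATE_PREFER_LATENCY,
	};

	return ioctl(fd, DRM_IOCTL_AMDGPU_GEM_CREATE, &args);
}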

Thanks.
Cascardo.

> >> Regards,
> >> Christian.
> >>
> >>>
> >>> Other drivers can later opt to use this mechanism too.
> >>>
> >>> Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo@...lia.com>
> >>> ---
> >>> Changes in v2:
> >>> - Make disabling direct reclaim an option.
> >>> - Link to v1: https://lore.kernel.org/r/20250910-ttm_pool_no_direct_reclaim-v1-1-53b0fa7f80fa@igalia.com
> >>>
> >>> ---
> >>> Thadeu Lima de Souza Cascardo (3):
> >>>        ttm: pool: allow requests to prefer latency over throughput
> >>>        ttm: pool: add a module parameter to set latency preference
> >>>        drm/amdgpu: allow allocation preferences when creating GEM object
> >>>
> >>>   drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c    |  3 ++-
> >>>   drivers/gpu/drm/amd/amdgpu/amdgpu_object.c |  3 ++-
> >>>   drivers/gpu/drm/ttm/ttm_pool.c             | 23 +++++++++++++++++------
> >>>   drivers/gpu/drm/ttm/ttm_tt.c               |  2 +-
> >>>   include/drm/ttm/ttm_bo.h                   |  5 +++++
> >>>   include/drm/ttm/ttm_pool.h                 |  2 +-
> >>>   include/drm/ttm/ttm_tt.h                   |  2 +-
> >>>   include/uapi/drm/amdgpu_drm.h              |  9 +++++++++
> >>>   8 files changed, 38 insertions(+), 11 deletions(-)
> >>> ---
> >>> base-commit: f83ec76bf285bea5727f478a68b894f5543ca76e
> >>> change-id: 20250909-ttm_pool_no_direct_reclaim-ee0807a2d3fe
> >>>
> >>> Best regards,
> >>
> > 
> 
