Message-ID: <20250819114932.597600-5-dev@lankhorst.se>
Date: Tue, 19 Aug 2025 13:49:33 +0200
From: Maarten Lankhorst <dev@...khorst.se>
To: Lucas De Marchi <lucas.demarchi@...el.com>,
'Thomas Hellström' <thomas.hellstrom@...ux.intel.com>,
Rodrigo Vivi <rodrigo.vivi@...el.com>,
David Airlie <airlied@...il.com>,
Simona Vetter <simona@...ll.ch>,
Maarten Lankhorst <dev@...khorst.se>,
Maxime Ripard <mripard@...nel.org>,
Natalie Vock <natalie.vock@....de>,
Tejun Heo <tj@...nel.org>,
Johannes Weiner <hannes@...xchg.org>,
'Michal Koutný' <mkoutny@...e.com>,
Michal Hocko <mhocko@...nel.org>,
Roman Gushchin <roman.gushchin@...ux.dev>,
Shakeel Butt <shakeel.butt@...ux.dev>,
Muchun Song <muchun.song@...ux.dev>,
Andrew Morton <akpm@...ux-foundation.org>,
David Hildenbrand <david@...hat.com>,
Lorenzo Stoakes <lorenzo.stoakes@...cle.com>,
"'Liam R . Howlett'" <Liam.Howlett@...cle.com>,
Vlastimil Babka <vbabka@...e.cz>,
Mike Rapoport <rppt@...nel.org>,
Suren Baghdasaryan <surenb@...gle.com>,
Thomas Zimmermann <tzimmermann@...e.de>
Cc: Michal Hocko <mhocko@...e.com>,
intel-xe@...ts.freedesktop.org,
dri-devel@...ts.freedesktop.org,
linux-kernel@...r.kernel.org,
cgroups@...r.kernel.org,
linux-mm@...ck.org
Subject: [RFC 0/3] cgroups: Add support for pinned device memory
When exporting dma-bufs to other devices, even when the drivers involved
support move_notify, performance degrades severely when eviction happens.
A particular example where this can happen is a multi-card setup,
where PCI-E peer-to-peer is used to avoid going through system memory.
If the buffer is evicted to system memory, not only is the evicting GPU
where the buffer resided affected, but the GPU waiting on the buffer
will also stall.
It also makes sense for long-running jobs not to be preempted by having
their buffers evicted, so it will make sense to have the ability to pin
system memory too.
This is dependent on patches by Dave Airlie, so it's not part of this
series yet. But I'm planning to extend pinning to the memory cgroup
controller in the future to handle this case.
Implementation details:
For each cgroup up to the root cgroup, the 'min' limit is checked
against the currently effectively pinned value. If the value would go
above 'min', the pinning attempt is rejected.
Pinned memory is handled slightly differently and affects the
calculation of effective min/low values: pinned memory is subtracted
from both before protection is distributed, and needs to be added back
afterwards. This is because increasing the amount of pinned memory
decreases the amount of free min/low memory for all cgroups that are
part of the hierarchy.
Maarten Lankhorst (3):
page_counter: Allow for pinning some amount of memory
cgroup/dmem: Implement pinning device memory
drm/xe: Add DRM_XE_GEM_CREATE_FLAG_PINNED flag and implementation
drivers/gpu/drm/xe/xe_bo.c | 66 +++++++++++++++++++++-
drivers/gpu/drm/xe/xe_dma_buf.c | 10 +++-
include/linux/cgroup_dmem.h | 2 +
include/linux/page_counter.h | 8 +++
include/uapi/drm/xe_drm.h | 10 +++-
kernel/cgroup/dmem.c | 57 ++++++++++++++++++-
mm/page_counter.c | 98 ++++++++++++++++++++++++++++++---
7 files changed, 237 insertions(+), 14 deletions(-)
--
2.50.0