Message-ID: <ea64d7bf-c01b-f4ad-a36b-f77e2c2ea931@linux.intel.com>
Date: Wed, 26 Jul 2023 12:14:24 +0200
From: Maarten Lankhorst <maarten.lankhorst@...ux.intel.com>
To: Tejun Heo <tj@...nel.org>,
Tvrtko Ursulin <tvrtko.ursulin@...ux.intel.com>
Cc: Intel-gfx@...ts.freedesktop.org, dri-devel@...ts.freedesktop.org,
cgroups@...r.kernel.org, linux-kernel@...r.kernel.org,
Johannes Weiner <hannes@...xchg.org>,
Zefan Li <lizefan.x@...edance.com>,
Dave Airlie <airlied@...hat.com>,
Daniel Vetter <daniel.vetter@...ll.ch>,
Rob Clark <robdclark@...omium.org>,
Stéphane Marchesin <marcheu@...omium.org>,
"T . J . Mercier" <tjmercier@...gle.com>, Kenny.Ho@....com,
Christian König <christian.koenig@....com>,
Brian Welty <brian.welty@...el.com>,
Tvrtko Ursulin <tvrtko.ursulin@...el.com>,
Eero Tamminen <eero.t.tamminen@...el.com>
Subject: Re: [PATCH 16/17] cgroup/drm: Expose memory stats
Hey,
On 2023-07-22 00:21, Tejun Heo wrote:
> On Wed, Jul 12, 2023 at 12:46:04PM +0100, Tvrtko Ursulin wrote:
>> $ cat drm.memory.stat
>> card0 region=system total=12898304 shared=0 active=0 resident=12111872 purgeable=167936
>> card0 region=stolen-system total=0 shared=0 active=0 resident=0 purgeable=0
>>
>> Data is generated on demand for simplicity of implementation, i.e. no
>> running totals are kept or accounted during migrations and such. Various
>> optimisations such as cheaper collection of data are possible but
>> deliberately left out for now.
>>
>> Overall, the feature is deemed to be useful to container orchestration
>> software (and manual management).
>>
>> Limits, either soft or hard, are not envisaged to be implemented on top
>> of this approach due to the on-demand nature of collecting the stats.
>
> So, yeah, if you want to add memory controls, we better think through how
> the fd ownership migration should work.
I've taken a look at the series, since I have been working on cgroup
memory eviction.
The scheduling stuff will work for i915, since it has a purely software
execlist scheduler, but I don't think it will work for GuC (firmware)
scheduling or other drivers that use the generic drm scheduler.
For something like this, you would probably want it to work inside the
drm scheduler first. Presumably this can be done by setting a weight on
each runqueue, and perhaps adding a callback to update the weight of an
already-running queue. Calculating the weights hierarchically might be
fun... a standalone sketch of that part follows below.
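To make the hierarchical part concrete, here is a minimal userspace
sketch of how an effective per-runqueue weight could be composed down
the cgroup tree, cpu.weight-style. Everything here (cg_node,
siblings_total, effective_weight) is invented for illustration; none
of it is existing drm scheduler or cgroup API:

#include <stdint.h>
#include <stdio.h>

/* Made-up node: weight is this group's knob, siblings_total the sum
 * of the weights of this group and its siblings at the same level. */
struct cg_node {
	struct cg_node *parent;
	uint32_t weight;
	uint32_t siblings_total;
};

/* Walk up the tree, scaling by each level's fraction of its
 * siblings, in 16.16 fixed point (roughly how cpu.weight composes). */
static uint64_t effective_weight(const struct cg_node *cg)
{
	uint64_t w = 1u << 16;	/* 1.0 */

	for (; cg && cg->parent; cg = cg->parent)
		w = w * cg->weight / cg->siblings_total;

	return w;
}

int main(void)
{
	struct cg_node root = { NULL, 100, 100 };
	struct cg_node a    = { &root, 200, 300 };	/* 2/3 of the GPU */
	struct cg_node a1   = { &a, 100, 400 };		/* 1/4 of its parent */

	printf("%.4f\n", effective_weight(&a1) / 65536.0);	/* ~0.1667 */
	return 0;
}

Each level scales the parent's share by the group's fraction of its
siblings, so 2/3 at the top and 1/4 one level down ends up at ~1/6.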
I have taken a look at how the rest of the cgroup controllers handle
ownership when a task is moved to a different cgroup, and the answer
was: not at all. If we create the scheduler controls only the first
time the fd is used, you could probably get rid of all the tracking;
a rough sketch follows below. This can be done very easily with the
drm scheduler.
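For illustration, a compile-only sketch of that lazy binding, with
invented names throughout (struct drm_file_ctx and
get_current_drm_cgroup() are not real DRM API):

#include <stddef.h>

struct drm_cgroup;	/* opaque here */

struct drm_file_ctx {
	struct drm_cgroup *sched_cgroup;	/* NULL until first job */
};

/* Assumed helper: the cgroup of the task submitting right now. */
struct drm_cgroup *get_current_drm_cgroup(void);

/* Bind on first submission; later cgroup migrations of the opener
 * are deliberately not observed, so no fd tracking is needed. */
static void bind_sched_cgroup_on_first_use(struct drm_file_ctx *file)
{
	if (file->sched_cgroup)
		return;

	file->sched_cgroup = get_current_drm_cgroup();
}

Like the other controllers, a migration after first use would simply
not be reflected, which is exactly what makes the tracking unnecessary.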
WRT memory, I think the consensus is to track system memory like
normal memory. Stolen memory doesn't need to be tracked; it's
kernel-only memory, used for internal bookkeeping only.

The only time userspace can directly manipulate stolen memory is by
mapping the pinned initial framebuffer into its own address space. The
only allocation it can trigger is when a framebuffer is displayed and
framebuffer compression creates some stolen memory. Userspace is not
aware of this, though, and has no way to manipulate those contents.
Cheers,
~Maarten