Message-ID: <ZMF3rLioJK9QJ0yj@slm.duckdns.org>
Date: Wed, 26 Jul 2023 09:44:44 -1000
From: Tejun Heo <tj@...nel.org>
To: Maarten Lankhorst <maarten.lankhorst@...ux.intel.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@...ux.intel.com>,
Intel-gfx@...ts.freedesktop.org, dri-devel@...ts.freedesktop.org,
cgroups@...r.kernel.org, linux-kernel@...r.kernel.org,
Johannes Weiner <hannes@...xchg.org>,
Zefan Li <lizefan.x@...edance.com>,
Dave Airlie <airlied@...hat.com>,
Daniel Vetter <daniel.vetter@...ll.ch>,
Rob Clark <robdclark@...omium.org>,
Stéphane Marchesin <marcheu@...omium.org>,
"T . J . Mercier" <tjmercier@...gle.com>, Kenny.Ho@....com,
Christian König <christian.koenig@....com>,
Brian Welty <brian.welty@...el.com>,
Tvrtko Ursulin <tvrtko.ursulin@...el.com>,
Eero Tamminen <eero.t.tamminen@...el.com>
Subject: Re: [PATCH 16/17] cgroup/drm: Expose memory stats
Hello,
On Wed, Jul 26, 2023 at 12:14:24PM +0200, Maarten Lankhorst wrote:
> > So, yeah, if you want to add memory controls, we better think through how
> > the fd ownership migration should work.
>
> I've taken a look at the series, since I have been working on cgroup memory
> eviction.
>
> The scheduling stuff will work for i915, since it has a purely software
> execlist scheduler, but I don't think it will work for GuC (firmware)
> scheduling or other drivers that use the generic drm scheduler.
>
> For something like this, you would probably want it to work inside the drm
> scheduler first. Presumably, this can be done by setting a weight on each
> runqueue, and perhaps adding a callback to update one for a running queue.
> Calculating the weights hierarchically might be fun..
I don't have any idea on this front. The basic idea of making high-level
distribution decisions in core code and letting individual drivers enforce
them in whatever way fits them best makes sense to me, but I don't know
enough to have an opinion here.
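
Just to make the hierarchical part concrete: the usual cpu.weight-style
flattening is that a group's effective share is its own weight divided by
the sum of its siblings' weights, multiplied down the tree. A rough sketch,
with entirely hypothetical names (this is not actual drm scheduler code):

struct drm_cg_node {
	struct drm_cg_node *parent;
	u64 weight;		/* configured weight, e.g. 1..10000 */
	u64 siblings_sum;	/* sum of configured weights at this level */
};

/*
 * Walk up the tree, scaling by this level's fraction at each step.
 * Returns the group's effective share of the device in 16.16 fixed
 * point; the driver can map that onto whatever its scheduler
 * understands (runqueue weights, timeslices, ...).
 */
static u64 drm_cg_effective_share(struct drm_cg_node *node)
{
	u64 share = 1 << 16;	/* 1.0 */

	for (; node->parent; node = node->parent)
		share = div64_u64(share * node->weight,
				  node->siblings_sum);

	return share;
}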
> I have taken a look at how the rest of cgroup controllers change ownership
> when moved to a different cgroup, and the answer was: not at all. If we
For persistent resources, that's the general rule. Whoever instantiates a
resource gets to own it until the resource is freed. There is an exception
with the pid controller, and there are discussions around whether we want
some sort of migration behavior with memcg, but by and large the
instantiator being the owner is the general model cgroup follows.
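
In other words, the owning cgroup gets looked up and pinned once at
allocation time, and the charge is dropped against that same cgroup on
free, regardless of where the allocating task has migrated in the
meantime. A minimal sketch, assuming hypothetical drmcg_charge() /
drmcg_uncharge() and task_cgroup_get() helpers:

struct gpu_buffer {
	struct cgroup *owner;	/* charged cgroup, fixed at alloc time */
	size_t size;
};

static int gpu_buffer_charge(struct gpu_buffer *buf)
{
	/* pin the cgroup of whoever is instantiating the buffer */
	buf->owner = task_cgroup_get(current);		/* hypothetical */
	return drmcg_charge(buf->owner, buf->size);	/* hypothetical */
}

static void gpu_buffer_uncharge(struct gpu_buffer *buf)
{
	/* uncharge the original owner, wherever the task is now */
	drmcg_uncharge(buf->owner, buf->size);		/* hypothetical */
	cgroup_put(buf->owner);
}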
> attempt to create the scheduler controls only the first time the fd is
> used, you could probably get rid of all the tracking.
> This can be done very easily with the drm scheduler.
>
> WRT memory, I think the consensus is to track system memory like normal
> memory. Stolen memory doesn't need to be tracked. It's kernel-only memory,
> used only for internal bookkeeping.
>
> The only time userspace can directly manipulate stolen memory is by mapping
> the pinned initial framebuffer into its own address space. The only
> allocation it can do is when a framebuffer is displayed and framebuffer
> compression creates some stolen memory. Userspace is not aware of this,
> though, and has no way to manipulate those contents.
So, my dumb understanding:
* Ownership of an fd can be established on the first ioctl call and doesn't
  need to be migrated afterwards. There are no persistent resources to
  migrate before the first call.
* Memory can then be tracked in a similar way to memcg. Memory gets charged
  to the initial instantiator and doesn't need to be moved around
  afterwards. There may be some discrepancies around stolen memory, but the
  magnitude of inaccuracy introduced that way is limited and bounded, and
  can be safely ignored.
Is that correct?
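
If so, the fd side could be as simple as latching the owner on first use.
A hypothetical sketch (task_cgroup_get() as above is made up):

struct drm_file_priv {
	struct mutex lock;
	struct cgroup *cg;	/* NULL until the first ioctl */
};

/*
 * Bind the fd to the caller's cgroup on first use. Once set, the
 * owner never changes, so passing the fd to another process or
 * migrating the task afterwards doesn't require moving any charges.
 */
static struct cgroup *drm_file_get_owner(struct drm_file_priv *priv)
{
	mutex_lock(&priv->lock);
	if (!priv->cg)
		priv->cg = task_cgroup_get(current);	/* hypothetical */
	mutex_unlock(&priv->lock);

	return priv->cg;
}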
Thanks.
--
tejun