Message-ID: <20230123154239.GA24348@blackbody.suse.cz>
Date: Mon, 23 Jan 2023 16:42:39 +0100
From: Michal Koutný <mkoutny@...e.com>
To: Tvrtko Ursulin <tvrtko.ursulin@...ux.intel.com>
Cc: Intel-gfx@...ts.freedesktop.org, dri-devel@...ts.freedesktop.org,
cgroups@...r.kernel.org, linux-kernel@...r.kernel.org,
Tejun Heo <tj@...nel.org>,
Johannes Weiner <hannes@...xchg.org>,
Zefan Li <lizefan.x@...edance.com>,
Dave Airlie <airlied@...hat.com>,
Daniel Vetter <daniel.vetter@...ll.ch>,
Rob Clark <robdclark@...omium.org>,
Stéphane Marchesin <marcheu@...omium.org>,
"T . J . Mercier" <tjmercier@...gle.com>, Kenny.Ho@....com,
Christian König <christian.koenig@....com>,
Brian Welty <brian.welty@...el.com>,
Tvrtko Ursulin <tvrtko.ursulin@...el.com>
Subject: Re: [RFC v3 00/12] DRM scheduling cgroup controller
Hello Tvrtko.
Interesting work.
On Thu, Jan 12, 2023 at 04:55:57PM +0000, Tvrtko Ursulin <tvrtko.ursulin@...ux.intel.com> wrote:
> Because of the heterogenous hardware and driver DRM capabilities, soft limits
> are implemented as a loose co-operative (bi-directional) interface between the
> controller and DRM core.
IIUC, this periodic scanning, calculating and applying could be partly
implemented with userspace utilities. (As you write, these limits are
best effort only, so it sounds to me that such a complete in-kernel
implementation is unnecessary.)
I think a better approach would be to avoid the async querying and
instead require implementing an explicit foo_charge_time(client, dur)
API (similar to how other controllers achieve this).
Your argument is the heterogeneity of devices -- does it mean there are
devices/drivers that cannot implement such synchronous charging?
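To make the suggestion concrete, here is a minimal sketch of the
controller side of such a synchronous charging hook. All names here
(drmcg_charge_time(), struct drmcg, drmcg_from_client(), budget_ns,
used_ns) are hypothetical and only illustrate the shape of the
interface, not the posted series; hierarchical charging up the css
tree is omitted for brevity.

#include <linux/atomic.h>
#include <linux/cgroup.h>
#include <linux/errno.h>
#include <drm/drm_file.h>

/* Hypothetical per-cgroup state of a DRM time controller. */
struct drmcg {
        struct cgroup_subsys_state css;
        u64 budget_ns;          /* soft GPU time budget per period */
        atomic64_t used_ns;     /* GPU time charged in this period */
};

/*
 * Hypothetical synchronous charging entry point.  A driver calls this
 * whenever it can attribute dur_ns of GPU time to a client.  Returns 0
 * while the owning cgroup is within its soft budget and -EAGAIN once
 * it is over budget, so the over-budget condition is reported inline
 * instead of via an asynchronous notification.  Best effort only.
 */
int drmcg_charge_time(struct drm_file *client, u64 dur_ns)
{
        struct drmcg *cg = drmcg_from_client(client);  /* hypothetical lookup */
        u64 used = atomic64_add_return(dur_ns, &cg->used_ns);

        return used > cg->budget_ns ? -EAGAIN : 0;
}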
> DRM core provides an API to query per process GPU utilization and 2nd API to
> receive notification from the cgroup controller when the group enters or exits
> the over budget condition.
The return value of foo_charge_time() would substitute for such a
notification synchronously. (By extension, all clients in an affected
cgroup could be notified to achieve some broader action.)
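On the driver side that could look roughly like the following.
struct foo_job, foo_job_done() and foo_sched_deprioritise() are
made-up driver internals, shown only to illustrate how the return
value replaces the notification:

/* Hypothetical driver: charge synchronously when a job completes. */
struct foo_job {
        struct drm_file *client;
        u64 start_ns;
        u64 end_ns;
};

static void foo_job_done(struct foo_job *job)
{
        u64 dur_ns = job->end_ns - job->start_ns;

        /* A non-zero return means the client's cgroup is over budget. */
        if (drmcg_charge_time(job->client, dur_ns))
                foo_sched_deprioritise(job->client);  /* driver-specific, best effort */
}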
> Individual DRM drivers which implement the interface are expected to act on this
> in the best-effort manner only. There are no guarantees that the soft limits
> will be respected.
Back to the original concern -- must all of this code reside in the kernel
when it is essentially advisory resource control?
> * DRM core is required to track all DRM clients belonging to processes so it
> can answer when asked how much GPU time is a process using.
> [...]
> * Individual drivers need to implement two similar hooks, but which work for
> a single DRM client. Over budget callback and GPU utilisation query.
This information is eventually aggregated for each process in a cgroup.
(And the action is carried out on a single client, not a process.)
The per-process tracking seems like an additional indirection. Could
the clients be associated directly with the DRM cgroup instead? [1]
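The direct association I have in mind would be something like pinning
the opener's DRM cgroup on the client itself. drm_cgrp_id and the
drmcg_css member of struct drm_file are hypothetical additions, just
to illustrate the idea:

/* Hypothetical: associate a DRM client with a cgroup at open time. */
static void drmcg_client_open(struct drm_file *client)
{
        /* drm_cgrp_id would be the DRM controller's cgroup_subsys id. */
        client->drmcg_css = task_get_css(current, drm_cgrp_id);
}

static void drmcg_client_close(struct drm_file *client)
{
        css_put(client->drmcg_css);
}

Whether such an association stays meaningful once the client's fd has
been passed to another process is exactly the concern in [1].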
Regards,
Michal
[1] I understand that sending a fd of a client is a regular operation, so
I'm not sure how cross-cgroup migrations would have to be handled in any
case.