Message-ID: <20220323104020.GI8477@blackbody.suse.cz>
Date: Wed, 23 Mar 2022 11:40:20 +0100
From: Michal Koutný <mkoutny@...e.com>
To: "T.J. Mercier" <tjmercier@...gle.com>
Cc: Maarten Lankhorst <maarten.lankhorst@...ux.intel.com>,
Maxime Ripard <mripard@...nel.org>,
Thomas Zimmermann <tzimmermann@...e.de>,
David Airlie <airlied@...ux.ie>,
Daniel Vetter <daniel@...ll.ch>,
Jonathan Corbet <corbet@....net>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
Arve Hjønnevåg <arve@...roid.com>,
Todd Kjos <tkjos@...roid.com>,
Martijn Coenen <maco@...roid.com>,
Joel Fernandes <joel@...lfernandes.org>,
Christian Brauner <brauner@...nel.org>,
Hridya Valsaraju <hridya@...gle.com>,
Suren Baghdasaryan <surenb@...gle.com>,
Sumit Semwal <sumit.semwal@...aro.org>,
Christian König <christian.koenig@....com>,
Benjamin Gaignard <benjamin.gaignard@...aro.org>,
Liam Mark <lmark@...eaurora.org>,
Laura Abbott <labbott@...hat.com>,
Brian Starkey <Brian.Starkey@....com>,
John Stultz <john.stultz@...aro.org>,
Tejun Heo <tj@...nel.org>, Zefan Li <lizefan.x@...edance.com>,
Johannes Weiner <hannes@...xchg.org>,
Shuah Khan <shuah@...nel.org>,
Kalesh Singh <kaleshsingh@...gle.com>, Kenny.Ho@....com,
dri-devel@...ts.freedesktop.org, linux-doc@...r.kernel.org,
linux-kernel@...r.kernel.org, linux-media@...r.kernel.org,
linaro-mm-sig@...ts.linaro.org, cgroups@...r.kernel.org,
linux-kselftest@...r.kernel.org
Subject: Re: [RFC v3 1/8] gpu: rfc: Proposal for a GPU cgroup controller
On Tue, Mar 22, 2022 at 08:41:55AM -0700, "T.J. Mercier" <tjmercier@...gle.com> wrote:
> So "total" is used twice here in two different contexts.
> The first one is the global "GPU" cgroup context. As in any buffer
> that any exporter claims is a GPU buffer, regardless of where/how it
> is allocated. So this refers to the sum of all gpu buffers of any
> type/source. An exporter contributes to this total by registering a
> corresponding gpucg_device and making charges against that device when
> it exports.
> The second one is in a per device context. This allows us to make a
> distinction between different types of GPU memory based on who
> exported the buffer. A single process can make use of several
> different types of dma buffers (for example cached and uncached
> versions of the same type of memory), and it would be useful to have
> different limits for each. These are distinguished by the device name
> string chosen when the gpucg_device is first registered.
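
In code form, my reading of the two levels above is roughly the toy model
below (a user-space sketch only; the gpucg_*_model names and the
"system"/"system-uncached" device names are made up for illustration, not
the RFC's actual interface):

	/* Toy model of the described accounting, not the RFC's kernel API. */
	#include <stdio.h>

	#define MAX_DEVS 8

	/* One entry per exporter-chosen device name (the per-device context). */
	struct gpucg_dev_model {
		const char *name;	/* name chosen at registration time  */
		unsigned long charged;	/* bytes charged by this exporter    */
	};

	static struct gpucg_dev_model devs[MAX_DEVS];
	static int nr_devs;

	/* An exporter registers a device once... */
	static struct gpucg_dev_model *gpucg_register_device_model(const char *name)
	{
		devs[nr_devs].name = name;
		devs[nr_devs].charged = 0;
		return &devs[nr_devs++];
	}

	/* ...and charges against it whenever it exports a buffer. */
	static void gpucg_charge_model(struct gpucg_dev_model *dev, unsigned long size)
	{
		dev->charged += size;
	}

	int main(void)
	{
		struct gpucg_dev_model *cached = gpucg_register_device_model("system");
		struct gpucg_dev_model *uncached = gpucg_register_device_model("system-uncached");
		unsigned long total = 0;
		int i;

		gpucg_charge_model(cached, 4 << 20);	/* 4 MiB cached buffer   */
		gpucg_charge_model(uncached, 1 << 20);	/* 1 MiB uncached buffer */

		/* the global context: total = sum over all registered devices */
		for (i = 0; i < nr_devs; i++)
			total += devs[i].charged;

		printf("total %lu\n", total);
		for (i = 0; i < nr_devs; i++)
			printf("%s %lu\n", devs[i].name, devs[i].charged);
		return 0;
	}
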
So is this understanding correct?
(assuming gpu.memory.current had a "total" line analogous to the one in gpu.memory.max)
$ cat gpu.memory.current
total T
dev1 d1
...
devN dn
T = Σ_i d_i + RAM_backed_buffers
and that some of the RAM_backed_buffers may also be accounted in
memory.current (case by case, depending on the allocator).
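
E.g. with made-up numbers, two exporters charging 4 MiB and 1 MiB plus
2 MiB of RAM-backed buffers would read

	$ cat gpu.memory.current
	total 7340032
	dev1 4194304
	dev2 1048576

with the 2 MiB RAM-backed part possibly showing up in memory.current too.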
Thanks,
Michal