Date:   Wed, 20 Jan 2021 13:37:57 +0100
From:   Daniel Vetter <daniel@...ll.ch>
To:     Yiwei Zhang <zzyiwei@...roid.com>
Cc:     David Airlie <airlied@...ux.ie>, Gerd Hoffmann <kraxel@...hat.com>,
        dri-devel <dri-devel@...ts.freedesktop.org>,
        "open list:VIRTIO CORE, NET..." 
        <virtualization@...ts.linux-foundation.org>,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
        Android Kernel Team <kernel-team@...roid.com>
Subject: Re: [PATCH v2] drm/virtio: Track total GPU memory for virtio driver

On Wed, Jan 20, 2021 at 10:51 AM Yiwei Zhang <zzyiwei@...roid.com> wrote:
>
> On Wed, Jan 20, 2021 at 1:11 AM Daniel Vetter <daniel@...ll.ch> wrote:
> >
> > On Tue, Jan 19, 2021 at 11:08:12AM -0800, Yiwei Zhang wrote:
> > > On Mon, Jan 18, 2021 at 11:03 PM Daniel Vetter <daniel@...ll.ch> wrote:
> > > >
> > > > On Tue, Jan 19, 2021 at 12:41 AM Yiwei Zhang <zzyiwei@...roid.com> wrote:
> > > > >
> > > > > On success of virtio_gpu_object_create, add the size of the newly
> > > > > allocated bo to the tracked total_mem. In drm_gem_object_funcs.free,
> > > > > after the gem bo has lost its last refcount, subtract the bo size
> > > > > from the tracked total_mem if the original underlying memory
> > > > > allocation was successful.
> > > > >
> > > > > Signed-off-by: Yiwei Zhang <zzyiwei@...roid.com>
> > > >
> > > > Isn't this something that ideally we'd do for everyone? Also, a
> > > > tracepoint for showing the total feels like tracepoint abuse: usually
> > > > we show totals somewhere in debugfs or similar, and use tracepoints
> > > > just for what's happening (i.e. which object got deleted/created).
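> > > >
> > > > Just to illustrate what I mean (a rough sketch only, reusing the
> > > > total_mem counter this patch adds; the entry would be registered in
> > > > the driver's drm_info_list table in virtgpu_debugfs.c):
> > > >
> > > > static int virtio_gpu_debugfs_total_mem(struct seq_file *m, void *data)
> > > > {
> > > >         struct drm_info_node *node = (struct drm_info_node *)m->private;
> > > >         struct virtio_gpu_device *vgdev = node->minor->dev->dev_private;
> > > >
> > > >         seq_printf(m, "%llu\n",
> > > >                    (unsigned long long)atomic64_read(&vgdev->total_mem));
> > > >         return 0;
> > > > }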
> > > >
> > > > What is this for exactly?
> > > > -Daniel
> > > >
> > > > > ---
> > > > >  drivers/gpu/drm/virtio/Kconfig          |  1 +
> > > > >  drivers/gpu/drm/virtio/virtgpu_drv.h    |  4 ++++
> > > > >  drivers/gpu/drm/virtio/virtgpu_object.c | 19 +++++++++++++++++++
> > > > >  3 files changed, 24 insertions(+)
> > > > >
> > > > > diff --git a/drivers/gpu/drm/virtio/Kconfig b/drivers/gpu/drm/virtio/Kconfig
> > > > > index b925b8b1da16..e103b7e883b1 100644
> > > > > --- a/drivers/gpu/drm/virtio/Kconfig
> > > > > +++ b/drivers/gpu/drm/virtio/Kconfig
> > > > > @@ -5,6 +5,7 @@ config DRM_VIRTIO_GPU
> > > > >         select DRM_KMS_HELPER
> > > > >         select DRM_GEM_SHMEM_HELPER
> > > > >         select VIRTIO_DMA_SHARED_BUFFER
> > > > > +       select TRACE_GPU_MEM
> > > > >         help
> > > > >            This is the virtual GPU driver for virtio.  It can be used with
> > > > >            QEMU based VMMs (like KVM or Xen).
> > > > > diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.h b/drivers/gpu/drm/virtio/virtgpu_drv.h
> > > > > index 6a232553c99b..7c60e7486bc4 100644
> > > > > --- a/drivers/gpu/drm/virtio/virtgpu_drv.h
> > > > > +++ b/drivers/gpu/drm/virtio/virtgpu_drv.h
> > > > > @@ -249,6 +249,10 @@ struct virtio_gpu_device {
> > > > >         spinlock_t resource_export_lock;
> > > > >         /* protects map state and host_visible_mm */
> > > > >         spinlock_t host_visible_lock;
> > > > > +
> > > > > +#ifdef CONFIG_TRACE_GPU_MEM
> > > > > +       atomic64_t total_mem;
> > > > > +#endif
> > > > >  };
> > > > >
> > > > >  struct virtio_gpu_fpriv {
> > > > > diff --git a/drivers/gpu/drm/virtio/virtgpu_object.c b/drivers/gpu/drm/virtio/virtgpu_object.c
> > > > > index d69a5b6da553..1e16226cebbe 100644
> > > > > --- a/drivers/gpu/drm/virtio/virtgpu_object.c
> > > > > +++ b/drivers/gpu/drm/virtio/virtgpu_object.c
> > > > > @@ -25,12 +25,29 @@
> > > > >
> > > > >  #include <linux/dma-mapping.h>
> > > > >  #include <linux/moduleparam.h>
> > > > > +#ifdef CONFIG_TRACE_GPU_MEM
> > > > > +#include <trace/events/gpu_mem.h>
> > > > > +#endif
> > > > >
> > > > >  #include "virtgpu_drv.h"
> > > > >
> > > > >  static int virtio_gpu_virglrenderer_workaround = 1;
> > > > >  module_param_named(virglhack, virtio_gpu_virglrenderer_workaround, int, 0400);
> > > > >
> > > > > +#ifdef CONFIG_TRACE_GPU_MEM
> > > > > +static inline void virtio_gpu_trace_total_mem(struct virtio_gpu_device *vgdev,
> > > > > +                                             s64 delta)
> > > > > +{
> > > > > +       u64 total_mem = atomic64_add_return(delta, &vgdev->total_mem);
> > > > > +
> > > > > +       trace_gpu_mem_total(0, 0, total_mem);
> > > > > +}
> > > > > +#else
> > > > > +static inline void virtio_gpu_trace_total_mem(struct virtio_gpu_device *vgdev, s64 delta)
> > > > > +{
> > > > > +}
> > > > > +#endif
> > > > > +
> > > > >  int virtio_gpu_resource_id_get(struct virtio_gpu_device *vgdev, uint32_t *resid)
> > > > >  {
> > > > >         if (virtio_gpu_virglrenderer_workaround) {
> > > > > @@ -104,6 +121,7 @@ static void virtio_gpu_free_object(struct drm_gem_object *obj)
> > > > >         struct virtio_gpu_device *vgdev = bo->base.base.dev->dev_private;
> > > > >
> > > > >         if (bo->created) {
> > > > > +               virtio_gpu_trace_total_mem(vgdev, -(obj->size));
> > > > >                 virtio_gpu_cmd_unref_resource(vgdev, bo);
> > > > >                 virtio_gpu_notify(vgdev);
> > > > >                 /* completion handler calls virtio_gpu_cleanup_object() */
> > > > > @@ -265,6 +283,7 @@ int virtio_gpu_object_create(struct virtio_gpu_device *vgdev,
> > > > >                 virtio_gpu_object_attach(vgdev, bo, ents, nents);
> > > > >         }
> > > > >
> > > > > +       virtio_gpu_trace_total_mem(vgdev, shmem_obj->base.size);
> > > > >         *bo_ptr = bo;
> > > > >         return 0;
> > > > >
> > > > > --
> > > > > 2.30.0.284.gd98b1dd5eaa7-goog
> > > > >
> > > >
> > > >
> > > > --
> > > > Daniel Vetter
> > > > Software Engineer, Intel Corporation
> > > > http://blog.ffwll.ch
> > >
> > > Thanks for your reply! The Android Cuttlefish virtual platform uses
> > > the virtio-gpu driver, and we currently carry this small patch
> > > downstream. It is essential for us because:
> > > (1) Android has already deprecated debugfs on production devices, and
> > > (2) Android GPU drivers are not DRM based, and this won't change in
> > > the short term.
> > >
> > > Android relies on this tracepoint plus eBPF to make GPU memory totals
> > > available at runtime on production devices, and this is already
> > > enforced. Not only can game developers get a reliable kernel-reported
> > > GPU memory total to look at, but Android also uses it to account GPU
> > > memory usage out of the system's "lost RAM".
> > >
> > > I'm not sure whether the other DRM drivers would like to integrate
> > > this tracepoint (maybe upstream drivers will move away from debugfs
> > > later as well?), but at least we hope virtio-gpu can take this.
> >
> > There's already another proposal from Android people for tracking
> > dma-buf (in dma-buf heaps/ion) usage. I think we need something that
> > is integrated overall, otherwise we get a complete mess of partial
> > solutions.
> >
> > Also, there's work going on to add cgroups support to gpu drivers
> > (pushed by amd and intel folks; the latest RFCs are quite old by now),
> > so that's another proposal for gpu memory usage tracking.
> >
> > Also, for upstream we need something that works with upstream gpu
> > drivers (even if you don't end up using that in shipping products). So
> > that's maybe another reason why a quick hack in the virtio-gpu driver
> > isn't the best approach here.
> >
> > I guess a good approach would be for Android to at least get to
> > something unified (gpu drivers, virtio-gpu, dma-buf heaps), and then
> > we need to figure out how to mesh that with the cgroups side somehow.
> >
> > Also note that at least on the dma-buf side we already have some other
> > debug features (for Android), so an overall "how does this all fit
> > together" picture would be good.
> > -Daniel
> >
> > >
> > > Many thanks!
> > > Yiwei
> >
> > --
> > Daniel Vetter
> > Software Engineer, Intel Corporation
> > http://blog.ffwll.ch
>
> The entire story is about better explaining Android system memory
> usage. The pieces fit together so that the dma-buf overlap can be
> removed.
>
> Android GPU vendors have integrated this tracepoint to track the total
> GPU memory usage (memory mapped into the GPU address space), which
> consists of the following:
> (1) memory directly allocated via the physical page allocator
> (2) imported external memory backed by dma-bufs
> (3) allocated exportable memory backed by dma-bufs
>
> Our Android kernel team is leading the other side of this joint
> effort, which is to remove the overlap with dma-bufs (those mapped
> into a GPU device), so that we can accurately explain the memory usage
> of the entire Android system.
>
> For virtio-gpu, since that's the driver used by our reference platform
> Cuttlefish (Cloud Android), we have to integrate the same tracepoint
> there as well, to enforce its use together with the eBPF program built
> on top that supports runtime queries of GPU memory on production
> devices. For virtio-gpu we only want to track GEM allocations at this
> point, since PRIME import is currently not supported/used in
> Cuttlefish. That's all we are doing in this small patch.
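>
> To illustrate the eBPF side (a rough sketch only, not part of this
> patch: the struct layout has to match the tracepoint's format file
> under events/gpu_mem/gpu_mem_total/, and the map/program names here
> are made up):
>
> #include <linux/bpf.h>
> #include <bpf/bpf_helpers.h>
>
> /* gpu_mem_total tracepoint args: 8 bytes of common tracepoint header,
>  * followed by gpu_id, pid and size.
>  */
> struct gpu_mem_total_args {
>         __u64 common_hdr;
>         __u32 gpu_id;
>         __u32 pid;
>         __u64 size;
> };
>
> /* Latest reported total per gpu_id, read back by a userspace service. */
> struct {
>         __uint(type, BPF_MAP_TYPE_HASH);
>         __uint(max_entries, 16);
>         __type(key, __u32);
>         __type(value, __u64);
> } gpu_mem_totals SEC(".maps");
>
> SEC("tracepoint/gpu_mem/gpu_mem_total")
> int tp_gpu_mem_total(struct gpu_mem_total_args *ctx)
> {
>         __u32 gpu_id = ctx->gpu_id;
>         __u64 total = ctx->size;
>
>         /* pid == 0 is the global per-GPU total; skip per-process records. */
>         if (ctx->pid != 0)
>                 return 0;
>
>         bpf_map_update_elem(&gpu_mem_totals, &gpu_id, &total, BPF_ANY);
>         return 0;
> }
>
> char LICENSE[] SEC("license") = "GPL";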

Ok, if the plan is to have that as a hard requirement for Android
across all Android uapi drivers, then:
- this needs to be done across all upstream drivers too (otherwise we
don't have that uapi)
- the usual open source requirements for new uapi apply (but I don't
think that should be a problem; these parts of Android are all open, I
think)
- the overlap with the dma-buf accounting needs to be figured out
before we merge either

Otherwise I don't see how this can work and be backed by upstream's
"never break uapi" guarantee.
-Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
