Message-ID: <7c0691dd62f58cffb42fbfb32eedc742038a2de0.camel@vmware.com>
Date: Fri, 24 Feb 2023 03:38:03 +0000
From: Zack Rusin <zackr@...are.com>
To: "tangmeng@...ontech.com" <tangmeng@...ontech.com>,
"daniel@...ll.ch" <daniel@...ll.ch>,
"dri-devel@...ts.freedesktop.org" <dri-devel@...ts.freedesktop.org>,
Linux-graphics-maintainer <Linux-graphics-maintainer@...are.com>,
"airlied@...il.com" <airlied@...il.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v2] drm/vmwgfx: Work around VMW_ALLOC_DMABUF
On Fri, 2023-02-24 at 11:29 +0800, Meng Tang wrote:
>
>
> On 2023/2/24 11:13, Zack Rusin wrote:
> >
> > That's correct. That's the way this works. The ioctl is allocating a buffer;
> > there's no infinite space for buffers on a system and, given that your app
> > just allocates and never frees buffers, at some point the space will run out
> > and the ioctl will return a failure.
> >
> Do you mean that users without certain privileges can allocate a
> buffer because it is designed like this? So we don't need to block
> users without certain privileges from succeeding with VMW_ALLOC_DMABUF?
That's correct. If only the drm master or admins could use rendering none of the
regular accelerated (e.g. OpenGL) apps would work.
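The point about finite buffer space can be sketched with a toy userspace allocator (all names here are hypothetical, not the actual vmwgfx code): a finite pool means an allocate-only app eventually sees failures, and freeing a buffer makes room again.

```c
#include <errno.h>
#include <stddef.h>
#include <stdlib.h>

/* Hypothetical stand-in for a kernel buffer-object allocator: the pool
 * is finite, so once the quota is exhausted further allocations fail,
 * just as the ioctl returns an error instead of succeeding forever. */
#define POOL_QUOTA 4

static int pool_used;

static void *bo_alloc(size_t size)
{
	if (pool_used >= POOL_QUOTA) {
		errno = ENOMEM;	/* pool exhausted: report failure */
		return NULL;
	}
	pool_used++;
	return malloc(size);
}

static void bo_free(void *bo)
{
	pool_used--;	/* releasing a buffer makes room for new ones */
	free(bo);
}
```

An app that calls bo_alloc() in a loop without ever calling bo_free() will get NULL/ENOMEM after POOL_QUOTA allocations, which is the expected and correct behavior, not a bug.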
> > As to the stack trace, I'm not sure what kernel you were testing on, so I
> > don't have access to the full log, but I can't reproduce it. There was a
> > change fixing exactly this (i.e. a buffer failed allocation but we were
> > still accessing it) in 6.2, in commit 1a6897921f52 ("drm/vmwgfx: Stop
> > accessing buffer objects which failed init"). The change was backported as
> > well, so you should be able to verify on any kernel with it.
> >
> > z
> >
> Thank you, the kernel version of my environment is older than 6.2. I
> will verify on my kernel with commit 1a6897921f52 ("drm/vmwgfx: Stop
> accessing buffer objects which failed init").
Great. Let me know if you have any problems with it.
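For reference, the bug class that commit addresses can be sketched in userspace C (hypothetical names, not the actual vmwgfx structures): when init fails, the half-constructed object is torn down, so the caller gets NULL back and must check it before any further access.

```c
#include <errno.h>
#include <stddef.h>
#include <stdlib.h>

/* Hypothetical analogue of a buffer object whose init can fail. */
struct bo {
	size_t size;
	void *pages;
};

static int bo_init(struct bo *bo, size_t size)
{
	bo->size = size;
	/* Pretend large backing-store allocations fail. */
	bo->pages = (size <= 1024) ? malloc(size) : NULL;
	if (!bo->pages)
		return -ENOMEM;
	return 0;
}

static struct bo *bo_create(size_t size)
{
	struct bo *bo = malloc(sizeof(*bo));

	if (!bo)
		return NULL;
	if (bo_init(bo, size) != 0) {
		free(bo);	/* init failed: the object is destroyed... */
		return NULL;	/* ...so hand back NULL, never a dangling bo */
	}
	return bo;
}

/* Fixed pattern: check the result before touching the object. */
static size_t bo_size_or_zero(struct bo *bo)
{
	if (!bo)		/* failed creation: stop here */
		return 0;
	return bo->size;	/* only dereference a live object */
}
```

The pre-fix bug was the inverse of bo_size_or_zero(): error paths kept dereferencing the object after init had already failed.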
z