Message-ID: <be4e1cd8-a994-303d-9424-14439ce1f7d4@ideasonboard.com>
Date: Thu, 17 Feb 2022 15:03:02 +0200
From: Tomi Valkeinen <tomi.valkeinen@...asonboard.com>
To: Ivaylo Dimitrov <ivo.g.dimitrov.75@...il.com>, tomba@...nel.org,
airlied@...ux.ie, daniel@...ll.ch
Cc: dri-devel@...ts.freedesktop.org, linux-kernel@...r.kernel.org,
linux-omap@...r.kernel.org, merlijn@...zup.org, tony@...mide.com
Subject: Re: [PATCH 0/3] drm: omapdrm: Fix excessive GEM buffers DMM/CMA usage
Hi Ivaylo,
On 19/01/2022 12:23, Ivaylo Dimitrov wrote:
> This patch series fixes excessive DMM or CMA usage by GEM buffers, which leads to
> various runtime allocation failures. The series enables daily usage of devices
> without exhausting limited resources like CMA or DMM space when GPU rendering is
> needed.
>
> The first patch doesn't bring any functional changes; it just moves some
> TILER/DMM-related code to a separate function, to simplify the review of the
> next two patches.
>
> The second patch allows off-CPU rendering to non-scanout buffers. Without it,
> driver-allocated GEM buffers on OMAP3 are basically unusable for anything but
> basic CPU-rendered examples: if we want GPU rendering, we must allocate the
> buffers as scanout buffers, which are CMA-allocated. CMA soon gets fragmented
> and we start seeing allocation failures. Such failures in Xorg cannot be
> handled gracefully, so the system is basically unusable.
>
> The third patch fixes a similar issue on OMAP4/5, where the DMM/TILER space
> gets fragmented over time, leading to allocation failures.
I think this is just hacking around the problem. The problem is that
omapdrm is being used by some as a generic buffer allocator. Those users
should be changed to use their own allocator or a generic allocator.
And we could then drop the OMAP_BO_SCANOUT flag, as all buffers would be
scanout buffers.
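
One such generic allocator is the DMA-BUF system heap. A rough sketch,
assuming CONFIG_DMABUF_HEAPS_SYSTEM is enabled so that /dev/dma_heap/system
exists; the resulting dma-buf fd can then be imported into the GPU and/or
KMS side as needed:

  #include <fcntl.h>
  #include <unistd.h>
  #include <sys/ioctl.h>
  #include <linux/dma-heap.h>   /* DMA_HEAP_IOCTL_ALLOC */

  /* Allocate 'len' bytes from the system heap; returns a dma-buf fd,
   * or -1 on failure. */
  static int alloc_from_system_heap(unsigned long len)
  {
          struct dma_heap_allocation_data data = {
                  .len = len,
                  .fd_flags = O_RDWR | O_CLOEXEC,
          };
          int heap, ret;

          heap = open("/dev/dma_heap/system", O_RDONLY | O_CLOEXEC);
          if (heap < 0)
                  return -1;

          ret = ioctl(heap, DMA_HEAP_IOCTL_ALLOC, &data);
          close(heap);

          return ret < 0 ? -1 : (int)data.fd;
  }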
Or do we have a regression in the driver? My understanding is that this
has never really worked.
Tomi