Date:	Wed, 27 Jul 2016 07:18:20 -0400
From:	Rob Clark <robdclark@...il.com>
To:	Eric Anholt <eric@...olt.net>
Cc:	"dri-devel@...ts.freedesktop.org" <dri-devel@...ts.freedesktop.org>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	stable <stable@...r.kernel.org>
Subject: Re: [PATCH 5/6] drm/vc4: Fix overflow mem unreferencing when the
 binner runs dry.

On Wed, Jul 27, 2016 at 1:37 AM, Eric Anholt <eric@...olt.net> wrote:
> Rob Clark <robdclark@...il.com> writes:
>
>> On Tue, Jul 26, 2016 at 7:11 PM, Eric Anholt <eric@...olt.net> wrote:
>>> Rob Clark <robdclark@...il.com> writes:
>>>
>>>> On Tue, Jul 26, 2016 at 4:47 PM, Eric Anholt <eric@...olt.net> wrote:
>>>>> Overflow memory handling is tricky: While it's still referenced by the
>>>>> BPO registers, we want to keep it from being freed.  When we are
>>>>> putting a new set of overflow memory in the registers, we need to
>>>>> assign the old one to the last rendering job using it.
>>>>>
>>>>> We were looking at "what's currently running in the binner", but since
>>>>> the bin/render submission split, we may end up with the binner
>>>>> completing and having no new job while the renderer is still
>>>>> processing.  So, if we don't find a bin job at all, look at the
>>>>> highest-seqno (last) render job to attach our overflow to.
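
A minimal sketch of the fallback being described, for readers following
along.  The helper and field names here (vc4_first_bin_job(),
vc4_last_render_job(), vc4->overflow_mem, the job_lock and unref-list
bookkeeping) are assumptions for illustration and may not match the
literal patch:

static void vc4_hand_off_overflow_bo(struct vc4_dev *vc4)
{
	struct vc4_exec_info *exec;
	unsigned long irqflags;

	spin_lock_irqsave(&vc4->job_lock, irqflags);

	/* Prefer whatever job is currently feeding the binner... */
	exec = vc4_first_bin_job(vc4);

	/*
	 * ...but since the bin/render submission split, the binner may have
	 * run dry while the renderer still reads the old overflow BO, so
	 * fall back to the highest-seqno (last) render job.
	 */
	if (!exec)
		exec = vc4_last_render_job(vc4);

	if (exec) {
		/* The old overflow BO is freed only when this job retires. */
		list_add_tail(&vc4->overflow_mem->unref_head,
			      &exec->unref_list);
		vc4->overflow_mem = NULL;
	}

	spin_unlock_irqrestore(&vc4->job_lock, irqflags);
}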
>>>>
>>>> so, drive-by comment.. but can you allocate gem BOs without backing
>>>> them immediately with pages?  If so, just always allocate the bo
>>>> up-front and attach it as a dependency of the batch, and only pin it
>>>> to actual pages when you have to overflow?
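
A rough sketch of the idea Rob is suggesting, assuming a shmem-backed
driver rather than vc4's CMA helpers.  The overflow_bo struct and its
functions are invented for illustration; only drm_gem_object_init() and
drm_gem_get_pages() are DRM core calls:

#include <linux/err.h>
#include <linux/mm.h>
#include <linux/slab.h>
#include <drm/drm_gem.h>

struct overflow_bo {
	struct drm_gem_object base;
	struct page **pages;	/* stays NULL until the overflow actually hits */
};

/*
 * Created at submit time so the job can list it as a dependency, but no
 * backing pages are allocated yet.
 */
static struct overflow_bo *overflow_bo_create(struct drm_device *dev,
					      size_t size)
{
	struct overflow_bo *bo = kzalloc(sizeof(*bo), GFP_KERNEL);

	if (!bo)
		return ERR_PTR(-ENOMEM);

	/* Sets up the shmem backing file without touching any pages. */
	if (drm_gem_object_init(dev, &bo->base, PAGE_ALIGN(size))) {
		kfree(bo);
		return ERR_PTR(-ENOMEM);
	}

	return bo;
}

/* Called only from the overflow path; this is where pages get allocated. */
static int overflow_bo_pin_pages(struct overflow_bo *bo)
{
	struct page **pages = drm_gem_get_pages(&bo->base);

	if (IS_ERR(pages))
		return PTR_ERR(pages);

	bo->pages = pages;
	return 0;
}

As Eric points out below, this split doesn't buy anything for vc4, because
the moment an overflow BO is needed its pages are needed too.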
>>>
>>> The amount of overflow for a given CL is arbitrary, depending on the
>>> geometry submitted, and the overflow pool just gets streamed into by the
>>> hardware as you submit bin jobs.  You'll end up allocating [0,n] new
>>> overflows per bin job.  I don't see where the "allocate gem BOs without
>>> backing them immediately with pages" idea would fit into this.
>>
>> well, even not knowing the size up front shouldn't really be a
>> show-stopper, unless you had to mmap it to userspace, perhaps..
>> normally backing pages aren't allocated until drm_gem_get_pages(), so
>> allocating the gem bo as a placeholder to track dependencies of the
>> batch/submit shouldn't be an issue.  But I noticed you don't use
>> drm_gem_get_pages().. maybe with the cma helpers it is harder to decouple
>> allocation of the drm_gem_object from the backing store.
>
> There's no period of time between "I need to allocate an overflow BO"
> and "I need pages in the BO", though.

oh, ok, so this is some memory that is already being used by the GPU,
not something that starts to be used only when you hit an overflow
condition..  I'd assumed it was something you were allocating in response
to the overflow irq, but it looks like you are actually *re*allocating.

BR,
-R

> I could have a different setup that allocated a massive (all of CMA?),
> fresh overflow BO per CL and populated page ranges in it as I overflow,
> but with CMA you really need to never do new allocations in the hot path
> because you get to stop and wait approximately forever.  So you'd want
> to chunk it up so you could cache the groups of contiguous pages of
> overflow, and it turns out we already have a thing for this in the form
> of GEM BOs.  Anyway, doing that means you're losing out on the rest
> of the last overflow BO for the new CL, expanding the working set in
> your precious 256MB CMA area.
>
> Well, OK, actually I *do* allocate a fresh overflow BO per CL today,
> because of leftover bringup code that I think I could just delete at
> this point.  I'm not doing that in a -fixes commit, though.
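
The chunk-and-cache approach Eric describes ("we already have a thing for
this in the form of GEM BOs") boils down to recycling fixed-size overflow
chunks so the hot path never falls into the CMA allocator.  A hypothetical
sketch, with every name invented for illustration:

#include <linux/list.h>
#include <linux/spinlock.h>
#include <linux/types.h>

#define OVERFLOW_CHUNK_SIZE	(256 * 1024)

struct overflow_chunk {
	struct list_head head;
	void *vaddr;		/* CMA backing, kept across reuse */
	dma_addr_t paddr;
};

struct overflow_chunk_cache {
	spinlock_t lock;
	struct list_head free_list;
};

/*
 * Fast path: reuse a previously freed chunk.  Only on a miss does the
 * caller pay for a fresh CMA allocation, and that has to happen outside
 * the hot path (e.g. from a workqueue), since CMA can stall for a very
 * long time.
 */
static struct overflow_chunk *
overflow_chunk_get(struct overflow_chunk_cache *cache)
{
	struct overflow_chunk *chunk = NULL;

	spin_lock(&cache->lock);
	if (!list_empty(&cache->free_list)) {
		chunk = list_first_entry(&cache->free_list,
					 struct overflow_chunk, head);
		list_del(&chunk->head);
	}
	spin_unlock(&cache->lock);

	return chunk;
}

The downside Eric notes still applies: whatever is left of the last chunk
is wasted for the new CL, growing the working set inside the 256MB CMA
area.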
