Message-ID: <20190704112534.v7icsuverf7wrbjq@sirius.home.kraxel.org>
Date: Thu, 4 Jul 2019 13:25:34 +0200
From: Gerd Hoffmann <kraxel@...hat.com>
To: Chia-I Wu <olvaffe@...il.com>
Cc: ML dri-devel <dri-devel@...ts.freedesktop.org>,
Gurchetan Singh <gurchetansingh@...omium.org>,
David Airlie <airlied@...ux.ie>,
Daniel Vetter <daniel@...ll.ch>,
"open list:VIRTIO GPU DRIVER"
<virtualization@...ts.linux-foundation.org>,
open list <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v6 08/18] drm/virtio: rework virtio_gpu_execbuffer_ioctl
fencing
Hi,
> > if (fence)
> > virtio_gpu_fence_emit(vgdev, hdr, fence);
> > + if (vbuf->objs) {
> > + virtio_gpu_array_add_fence(vbuf->objs, &fence->f);
> > + virtio_gpu_array_unlock_resv(vbuf->objs);
> > + }
> This is with the spinlock held. Maybe we should move the
> virtio_gpu_array_unlock_resv call out of the critical section.
That would bring back the race ...
> I am actually more concerned about virtio_gpu_array_add_fence, but it
> is also harder to move. Should we add a kref to the object array?
Yep, refcounting would be the other way to fix the race.
> This bothers me because I recently ran into a CPU-bound game with very
> bad lock contention here.
Hmm. Any clue where this comes from? Multiple threads competing for
virtio buffers I guess? Maybe we should have larger virtqueues?
cheers,
Gerd