Date:   Tue, 22 May 2018 14:41:29 -0300
From:   Ezequiel Garcia <ezequiel@...labora.com>
To:     Hans Verkuil <hverkuil@...all.nl>, linux-media@...r.kernel.org
Cc:     kernel@...labora.com,
        Mauro Carvalho Chehab <mchehab@....samsung.com>,
        Shuah Khan <shuahkh@....samsung.com>,
        Pawel Osciak <pawel@...iak.com>,
        Alexandre Courbot <acourbot@...omium.org>,
        Sakari Ailus <sakari.ailus@....fi>,
        Brian Starkey <brian.starkey@....com>,
        linux-kernel@...r.kernel.org,
        Gustavo Padovan <gustavo.padovan@...labora.com>
Subject: Re: [PATCH v10 12/16] vb2: add in-fence support to QBUF

On Tue, 2018-05-22 at 18:48 +0200, Hans Verkuil wrote:
> On 22/05/18 18:22, Ezequiel Garcia wrote:
> > > > @@ -1615,7 +1762,12 @@ static void __vb2_dqbuf(struct vb2_buffer *vb)
> > > >  		return;
> > > >  
> > > >  	vb->state = VB2_BUF_STATE_DEQUEUED;
> > > > -
> > > > +	if (vb->in_fence) {
> > > > +		if (dma_fence_remove_callback(vb->in_fence, &vb->fence_cb))
> > > > +			__vb2_buffer_put(vb);
> > > > +		dma_fence_put(vb->in_fence);
> > > > +		vb->in_fence = NULL;
> > > > +	}
> > > >  	/* unmap DMABUF buffer */
> > > >  	if (q->memory == VB2_MEMORY_DMABUF)
> > > >  		for (i = 0; i < vb->num_planes; ++i) {
> > > > @@ -1653,7 +1805,7 @@ int vb2_core_dqbuf(struct vb2_queue *q, unsigned int *pindex, void *pb,
> > > >  	if (pindex)
> > > >  		*pindex = vb->index;
> > > >  
> > > > -	/* Fill buffer information for the userspace */
> > > > +	/* Fill buffer information for userspace */
> > > >  	if (pb)
> > > >  		call_void_bufop(q, fill_user_buffer, vb, pb);
> > > >  
> > > > @@ -1700,8 +1852,8 @@ static void __vb2_queue_cancel(struct vb2_queue *q)
> > > >  	if (WARN_ON(atomic_read(&q->owned_by_drv_count))) {
> > > >  		for (i = 0; i < q->num_buffers; ++i)
> > > >  			if (q->bufs[i]->state == VB2_BUF_STATE_ACTIVE) {
> > > > -				pr_warn("driver bug: stop_streaming operation is leaving buf %p in active state\n",
> > > > -					q->bufs[i]);
> > > > +				pr_warn("driver bug: stop_streaming operation is leaving buf[%d] 0x%p in active state\n",
> > > > +					q->bufs[i]->index, q->bufs[i]);
> > > >  				vb2_buffer_done(q->bufs[i], VB2_BUF_STATE_ERROR);
> > > >  			}
> > > 
> > > Shouldn't any pending fences be canceled here?
> > > 
> > 
> > No, we don't have to flush -- that's the reason for the refcount :)
> > The qbuf_work won't do anything if all the buffers have been returned
> > by the driver (in an error or done state), and if !streaming.
> > 
> > Also, note that this is why qbuf_work checks for the queued state,
> > and not for the error state.
> > 
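
To spell out the lifetime rule: the QBUF path takes a reference on the
buffer when it arms the in-fence callback, and whoever "wins" -- the
fence callback or the dequeue/cancel path -- drops it. A minimal sketch,
where __vb2_buffer_get() and vb2_qbuf_fence_cb() are hypothetical names
for illustration, not the exact code in the series:

	/* QBUF path: pin the buffer on behalf of the callback. */
	__vb2_buffer_get(vb);
	if (dma_fence_add_callback(vb->in_fence, &vb->fence_cb,
				   vb2_qbuf_fence_cb)) {
		/*
		 * Fence was already signaled: the callback will never
		 * run, so drop the reference taken for it.
		 */
		__vb2_buffer_put(vb);
	}

	/*
	 * DQBUF/cancel path, as in the __vb2_dqbuf() hunk above: if
	 * dma_fence_remove_callback() returns true, the callback had
	 * not fired yet, so we inherit its reference and must drop it
	 * ourselves.
	 */
	if (dma_fence_remove_callback(vb->in_fence, &vb->fence_cb))
		__vb2_buffer_put(vb);
	dma_fence_put(vb->in_fence);
	vb->in_fence = NULL;
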
> > > I feel uncomfortable with the refcounting of buffers, I'd rather that when we
> > > cancel the queue all fences for buffers are removed/canceled/whatever.
> > > 
> > > Is there any reason for refcounting if we cancel all pending fences here?
> > > 
> > > Note that besides canceling fences you also need to cancel/flush __qbuf_work.
> > > 
> > > 
> > 
> > As I said above, I'm trying to avoid cancelling/flushing the workqueue.
> > Currently, I believe it works fine without any flushing, provided we
> > refcount the buffers.
> > 
> > The problem with cancelling the workqueue is that you need to drop the
> > queue lock to avoid a deadlock. Having a refcount seemed more natural
> > to me.
> > 
> > Thoughts?
> > 
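
To illustrate the deadlock I mean: flushing while holding the queue lock
would wait on __qbuf_work(), which itself needs that lock. A rough
sketch, assuming qbuf_work is the work item handled by __qbuf_work()
(member name hypothetical; simplified, not the exact code in the series):

	/* streamoff/cancel path, holding the queue lock: */
	mutex_lock(q->lock);
	flush_work(&vb->qbuf_work);	/* waits for __qbuf_work()... */
	mutex_unlock(q->lock);

	/*
	 * ...but __qbuf_work() needs the same lock to queue the buffer
	 * to the driver:
	 */
	static void __qbuf_work(struct work_struct *work)
	{
		struct vb2_buffer *vb =
			container_of(work, struct vb2_buffer, qbuf_work);

		mutex_lock(vb->vb2_queue->lock); /* never acquired -> deadlock */
		/* ... */
		mutex_unlock(vb->vb2_queue->lock);
	}

So either we drop the lock around the flush, or we refcount the buffers
and let the work run to completion on its own; the series does the latter.
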
> 
> I'll take another look tomorrow morning. Do you have a public git tree containing
> this series that I can browse?
> 
> 

Sure, there you go: http://git.infradead.org/users/ezequielg/linux/shortlog/refs/heads/fences_v10_v4.17-rc1
