Message-ID: <20180717114215.GA14414@nautica>
Date:   Tue, 17 Jul 2018 13:42:15 +0200
From:   Dominique Martinet <asmadeus@...ewreck.org>
To:     jiangyiwen <jiangyiwen@...wei.com>
Cc:     Andrew Morton <akpm@...ux-foundation.org>,
        Eric Van Hensbergen <ericvh@...il.com>,
        Ron Minnich <rminnich@...dia.gov>,
        Latchesar Ionkov <lucho@...kov.net>,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
        v9fs-developer@...ts.sourceforge.net
Subject: Re: [V9fs-developer] [PATCH v2] net/9p: Fix a deadlock case in the
 virtio transport


> Subject: net/9p: Fix a deadlock case in the virtio transport

I hadn't noticed this in v1, but how is this a deadlock fix?
The previous code doesn't look like it can deadlock to me; the commit
message body is more accurate than the subject.

jiangyiwen wrote on Tue, Jul 17, 2018:
> When the client has multiple threads issuing I/O requests
> continuously, and the server performs very well, the CPU may
> end up running in irq context for a long time because the
> *while* loop keeps finding bufs in the virtqueue.
> 
> So we should hold chan->lock across the whole loop.
> 
> Signed-off-by: Yiwen Jiang <jiangyiwen@...wei.com>
> ---
>  net/9p/trans_virtio.c | 17 ++++++-----------
>  1 file changed, 6 insertions(+), 11 deletions(-)
> 
> diff --git a/net/9p/trans_virtio.c b/net/9p/trans_virtio.c
> index 05006cb..e5fea8b 100644
> --- a/net/9p/trans_virtio.c
> +++ b/net/9p/trans_virtio.c
> @@ -148,20 +148,15 @@ static void req_done(struct virtqueue *vq)
> 
>  	p9_debug(P9_DEBUG_TRANS, ": request done\n");
> 
> -	while (1) {
> -		spin_lock_irqsave(&chan->lock, flags);
> -		req = virtqueue_get_buf(chan->vq, &len);
> -		if (req == NULL) {
> -			spin_unlock_irqrestore(&chan->lock, flags);
> -			break;
> -		}
> -		chan->ring_bufs_avail = 1;
> -		spin_unlock_irqrestore(&chan->lock, flags);
> -		/* Wakeup if anyone waiting for VirtIO ring space. */
> -		wake_up(chan->vc_wq);
> +	spin_lock_irqsave(&chan->lock, flags);
> +	while ((req = virtqueue_get_buf(chan->vq, &len)) != NULL) {
>  		if (len)
>  			p9_client_cb(chan->client, req, REQ_STATUS_RCVD);
>  	}
> +	chan->ring_bufs_avail = 1;

Do we have a guarantee that req_done is only called when there is at least
one buf to read?
For example, couldn't two threads queue the same callback, with the first
one reading everything and the second finding nothing to read?

If virtblk_done takes care of setting a "req_done" bool so that it only
notifies waiters when something has actually been done, I'd rather have a
good reason before doing things differently here, even if you can argue
that nothing bad happens on a gratuitous wake_up (see the sketch below,
after the quoted hunk).

> +	spin_unlock_irqrestore(&chan->lock, flags);
> +	/* Wakeup if anyone waiting for VirtIO ring space. */
> +	wake_up(chan->vc_wq);
>  }
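
To make that concrete, here is the rough shape of what I have in mind
(untested sketch; the chan fields and helpers are the ones already used in
trans_virtio.c, the need_wakeup bool is new):

static void req_done(struct virtqueue *vq)
{
	struct virtio_chan *chan = vq->vdev->priv;
	unsigned int len;
	struct p9_req_t *req;
	unsigned long flags;
	bool need_wakeup = false;

	p9_debug(P9_DEBUG_TRANS, ": request done\n");

	spin_lock_irqsave(&chan->lock, flags);
	while ((req = virtqueue_get_buf(chan->vq, &len)) != NULL) {
		if (!need_wakeup) {
			/* We freed at least one ring slot, so waiters
			 * can make progress: remember to wake them up
			 * once, after dropping the lock.
			 */
			chan->ring_bufs_avail = 1;
			need_wakeup = true;
		}
		if (len)
			p9_client_cb(chan->client, req, REQ_STATUS_RCVD);
	}
	spin_unlock_irqrestore(&chan->lock, flags);

	/* Wake up anyone waiting for VirtIO ring space, but only if
	 * this call actually completed a request.
	 */
	if (need_wakeup)
		wake_up(chan->vc_wq);
}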

Thanks,
-- 
Dominique
