Date:   Thu, 15 Dec 2022 09:30:45 +0100
From:   Stefano Garzarella <sgarzare@...hat.com>
To:     Paolo Abeni <pabeni@...hat.com>
Cc:     Bobby Eshleman <bobby.eshleman@...edance.com>,
        Bobby Eshleman <bobbyeshleman@...il.com>,
        Cong Wang <cong.wang@...edance.com>,
        Jiang Wang <jiang.wang@...edance.com>,
        Krasnov Arseniy <oxffffaa@...il.com>,
        Stefan Hajnoczi <stefanha@...hat.com>,
        "Michael S. Tsirkin" <mst@...hat.com>,
        Jason Wang <jasowang@...hat.com>,
        "David S. Miller" <davem@...emloft.net>,
        Eric Dumazet <edumazet@...gle.com>,
        Jakub Kicinski <kuba@...nel.org>, kvm@...r.kernel.org,
        virtualization@...ts.linux-foundation.org, netdev@...r.kernel.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH net-next v7] virtio/vsock: replace virtio_vsock_pkt with
 sk_buff

On Wed, Dec 14, 2022 at 11:58:47AM +0100, Paolo Abeni wrote:
>On Tue, 2022-12-13 at 19:28 +0000, Bobby Eshleman wrote:
>> diff --git a/drivers/vhost/vsock.c b/drivers/vhost/vsock.c
>> index 5703775af129..2a5994b029b2 100644
>> --- a/drivers/vhost/vsock.c
>> +++ b/drivers/vhost/vsock.c
>> @@ -51,8 +51,7 @@ struct vhost_vsock {
>>  	struct hlist_node hash;
>>
>>  	struct vhost_work send_pkt_work;
>> -	spinlock_t send_pkt_list_lock;
>> -	struct list_head send_pkt_list;	/* host->guest pending packets */
>> +	struct sk_buff_head send_pkt_queue; /* host->guest pending packets */
>>
>>  	atomic_t queued_replies;
>>
>> @@ -108,40 +107,33 @@ vhost_transport_do_send_pkt(struct vhost_vsock *vsock,
>>  	vhost_disable_notify(&vsock->dev, vq);
>>
>>  	do {
>> -		struct virtio_vsock_pkt *pkt;
>> +		struct virtio_vsock_hdr *hdr;
>> +		size_t iov_len, payload_len;
>>  		struct iov_iter iov_iter;
>> +		u32 flags_to_restore = 0;
>> +		struct sk_buff *skb;
>>  		unsigned out, in;
>>  		size_t nbytes;
>> -		size_t iov_len, payload_len;
>>  		int head;
>> -		u32 flags_to_restore = 0;
>>
>> -		spin_lock_bh(&vsock->send_pkt_list_lock);
>> -		if (list_empty(&vsock->send_pkt_list)) {
>> -			spin_unlock_bh(&vsock->send_pkt_list_lock);
>> +		spin_lock(&vsock->send_pkt_queue.lock);
>> +		skb = __skb_dequeue(&vsock->send_pkt_queue);
>> +		spin_unlock(&vsock->send_pkt_queue.lock);
>
>Here you use a plain spin_lock(), but every other lock has the _bh()
>variant. A few lines above this function acquires a mutex, so this is
>process context (and not BH context): I guess you should use _bh()
>here, too.

Maybe we can directly use the new virtio_vsock_skb_dequeue() here.
IIRC we added it exactly so that the same kind of lock is used
everywhere.
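
For reference, the helper is roughly this (a sketch from memory; the
exact definition this series adds in include/linux/virtio_vsock.h may
differ):

static inline struct sk_buff *
virtio_vsock_skb_dequeue(struct sk_buff_head *list)
{
	struct sk_buff *skb;

	/* Take the queue lock with the _bh variant, consistently
	 * with the other users of this lock.
	 */
	spin_lock_bh(&list->lock);
	skb = __skb_dequeue(list);
	spin_unlock_bh(&list->lock);

	return skb;
}

so the vhost site above would just become:

	skb = virtio_vsock_skb_dequeue(&vsock->send_pkt_queue);

which would also take care of the _bh point.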

Thanks,
Stefano
