Message-ID: <46ba9b55-c6ff-925c-d51a-8da9d1abd2f2@sberdevices.ru>
Date: Fri, 24 Mar 2023 12:06:54 +0300
From: Arseniy Krasnov <avkrasnov@...rdevices.ru>
To: Stefano Garzarella <sgarzare@...hat.com>,
syzbot <syzbot+befff0a9536049e7902e@...kaller.appspotmail.com>,
Bobby Eshleman <bobby.eshleman@...edance.com>,
Bobby Eshleman <bobby.eshleman@...il.com>
CC: <davem@...emloft.net>, <edumazet@...gle.com>, <kuba@...nel.org>,
<kvm@...r.kernel.org>, <linux-kernel@...r.kernel.org>,
<netdev@...r.kernel.org>, <pabeni@...hat.com>,
<stefanha@...hat.com>, <syzkaller-bugs@...glegroups.com>,
<virtualization@...ts.linux-foundation.org>,
Krasnov Arseniy <oxffffaa@...il.com>
Subject: Re: [syzbot] [kvm?] [net?] [virt?] general protection fault in
virtio_transport_purge_skbs
On 24.03.2023 12:06, Stefano Garzarella wrote:
> On Fri, Mar 24, 2023 at 9:55 AM Stefano Garzarella <sgarzare@...hat.com> wrote:
>>
>> On Fri, Mar 24, 2023 at 9:31 AM Stefano Garzarella <sgarzare@...hat.com> wrote:
>>>
>>> Hi Bobby,
>>> can you take a look at this report?
>>>
>>> It seems related to the changes we made to support skbuff.
>>
>> Could it be a problem of concurrent access to pkt_queue ?
>>
>> IIUC we should hold pkt_queue.lock when we call skb_queue_splice_init()
>> and remove pkt_list_lock. (or hold pkt_list_lock when calling
>> virtio_transport_purge_skbs, but pkt_list_lock seems useless now that
>> we use skbuff)
>>
>
> The previous patch was missing a hunk; a new one is attached:
>
> #syz test https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git fff5a5e7f528
>
> --- a/net/vmw_vsock/vsock_loopback.c
> +++ b/net/vmw_vsock/vsock_loopback.c
> @@ -15,7 +15,6 @@
> struct vsock_loopback {
> struct workqueue_struct *workqueue;
>
> - spinlock_t pkt_list_lock; /* protects pkt_list */
> struct sk_buff_head pkt_queue;
> struct work_struct pkt_work;
> };
> @@ -32,9 +31,7 @@ static int vsock_loopback_send_pkt(struct sk_buff *skb)
> struct vsock_loopback *vsock = &the_vsock_loopback;
> int len = skb->len;
>
> - spin_lock_bh(&vsock->pkt_list_lock);
> skb_queue_tail(&vsock->pkt_queue, skb);
Hello Stefano and Bobby,
Small remark: maybe we can use virtio_vsock_skb_queue_tail() here instead
of skb_queue_tail(). skb_queue_tail() disables IRQs while holding the queue
spinlock, whereas virtio_vsock_skb_queue_tail() only uses spin_lock_bh().
The vhost and virtio transports already use virtio_vsock_skb_queue_tail().
Thanks, Arseniy
> - spin_unlock_bh(&vsock->pkt_list_lock);
>
> queue_work(vsock->workqueue, &vsock->pkt_work);
>
> @@ -113,9 +110,9 @@ static void vsock_loopback_work(struct work_struct *work)
>
> skb_queue_head_init(&pkts);
>
> - spin_lock_bh(&vsock->pkt_list_lock);
> + spin_lock_bh(&vsock->pkt_queue.lock);
> skb_queue_splice_init(&vsock->pkt_queue, &pkts);
> - spin_unlock_bh(&vsock->pkt_list_lock);
> + spin_unlock_bh(&vsock->pkt_queue.lock);
>
> while ((skb = __skb_dequeue(&pkts))) {
> virtio_transport_deliver_tap_pkt(skb);
> @@ -132,7 +129,6 @@ static int __init vsock_loopback_init(void)
> if (!vsock->workqueue)
> return -ENOMEM;
>
> - spin_lock_init(&vsock->pkt_list_lock);
> skb_queue_head_init(&vsock->pkt_queue);
> INIT_WORK(&vsock->pkt_work, vsock_loopback_work);
>
> @@ -156,9 +152,7 @@ static void __exit vsock_loopback_exit(void)
>
> flush_work(&vsock->pkt_work);
>
> - spin_lock_bh(&vsock->pkt_list_lock);
> virtio_vsock_skb_queue_purge(&vsock->pkt_queue);
> - spin_unlock_bh(&vsock->pkt_list_lock);
>
> destroy_workqueue(vsock->workqueue);
> }
>