Message-ID: <rnehgb4kcntzebpzgpofhavo2la5eqjek3ej4gjm6tl5fb55wp@l4vroereu5ws>
Date: Wed, 2 Oct 2024 18:42:56 +0200
From: Stefano Garzarella <sgarzare@...hat.com>
To: "Michael S. Tsirkin" <mst@...hat.com>
Cc: linux-kernel@...r.kernel.org, Christian Brauner <brauner@...nel.org>,
Luigi Leonardi <luigi.leonardi@...look.com>, Jason Wang <jasowang@...hat.com>,
Xuan Zhuo <xuanzhuo@...ux.alibaba.com>, Eugenio Pérez <eperezma@...hat.com>,
Stefan Hajnoczi <stefanha@...hat.com>, "David S. Miller" <davem@...emloft.net>,
Eric Dumazet <edumazet@...gle.com>, Jakub Kicinski <kuba@...nel.org>,
Paolo Abeni <pabeni@...hat.com>, Marco Pinna <marco.pinn95@...il.com>,
virtualization@...ts.linux.dev, kvm@...r.kernel.org, netdev@...r.kernel.org
Subject: Re: [PATCH] vsock/virtio: use GFP_ATOMIC under RCU read lock
On Wed, Oct 02, 2024 at 04:02:06PM GMT, Stefano Garzarella wrote:
>On Wed, Oct 02, 2024 at 09:41:42AM GMT, Michael S. Tsirkin wrote:
>>virtio_transport_send_pkt is now called on the transport fast path,
>>under RCU read lock. In that case, we have a bug: virtqueue_add_sgs
>>is called with GFP_KERNEL, and might sleep.
>>
>>Pass the gfp flags as an argument, and use GFP_ATOMIC on
>>the fast path.
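
(Side note for anyone following along: the problem boils down to the
pattern sketched below. This is a simplified illustration, not the
exact call chain in the driver.)

	rcu_read_lock();	/* RCU read-side section: sleeping is not allowed */
	...
	/* GFP_KERNEL may block waiting for memory reclaim, i.e. sleep: */
	virtqueue_add_sgs(vq, sgs, out_sg, in_sg, skb, GFP_KERNEL);	/* BUG */
	/* GFP_ATOMIC never sleeps; it may fail under memory pressure
	 * instead, which the caller must handle:
	 */
	virtqueue_add_sgs(vq, sgs, out_sg, in_sg, skb, GFP_ATOMIC);	/* OK */
	...
	rcu_read_unlock();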
>>
>>Link: https://lore.kernel.org/all/hfcr2aget2zojmqpr4uhlzvnep4vgskblx5b6xf2ddosbsrke7@nt34bxgp7j2x
>>Fixes: efcd71af38be ("vsock/virtio: avoid queuing packets when intermediate queue is empty")
>>Reported-by: Christian Brauner <brauner@...nel.org>
>>Cc: Stefano Garzarella <sgarzare@...hat.com>
>>Cc: Luigi Leonardi <luigi.leonardi@...look.com>
>>Signed-off-by: Michael S. Tsirkin <mst@...hat.com>
>>---
>>
>>Lightly tested. Christian, could you pls confirm this fixes the problem
>>for you? Stefano, it's a holiday here - could you pls help test!
>
>Sure, thanks for the quick fix! I was thinking something similar ;-)
>
>>Thanks!
>>
>>
>>net/vmw_vsock/virtio_transport.c | 8 ++++----
>>1 file changed, 4 insertions(+), 4 deletions(-)
>>
>>diff --git a/net/vmw_vsock/virtio_transport.c b/net/vmw_vsock/virtio_transport.c
>>index f992f9a216f0..0cd965f24609 100644
>>--- a/net/vmw_vsock/virtio_transport.c
>>+++ b/net/vmw_vsock/virtio_transport.c
>>@@ -96,7 +96,7 @@ static u32 virtio_transport_get_local_cid(void)
>>
>>/* Caller need to hold vsock->tx_lock on vq */
>>static int virtio_transport_send_skb(struct sk_buff *skb, struct virtqueue *vq,
>>- struct virtio_vsock *vsock)
>>+ struct virtio_vsock *vsock, gfp_t gfp)
>>{
>> int ret, in_sg = 0, out_sg = 0;
>> struct scatterlist **sgs;
>>@@ -140,7 +140,7 @@ static int virtio_transport_send_skb(struct sk_buff *skb, struct virtqueue *vq,
>> }
>> }
>>
>>- ret = virtqueue_add_sgs(vq, sgs, out_sg, in_sg, skb, GFP_KERNEL);
>>+ ret = virtqueue_add_sgs(vq, sgs, out_sg, in_sg, skb, gfp);
>> /* Usually this means that there is no more space available in
>> * the vq
>> */
>>@@ -178,7 +178,7 @@ virtio_transport_send_pkt_work(struct work_struct *work)
>>
>> reply = virtio_vsock_skb_reply(skb);
>>
>>- ret = virtio_transport_send_skb(skb, vq, vsock);
>>+ ret = virtio_transport_send_skb(skb, vq, vsock, GFP_KERNEL);
>> if (ret < 0) {
>> virtio_vsock_skb_queue_head(&vsock->send_pkt_queue,
>> skb);
>> break;
>>@@ -221,7 +221,7 @@ static int virtio_transport_send_skb_fast_path(struct virtio_vsock *vsock, struc
>> if (unlikely(ret == 0))
>> return -EBUSY;
>>
>>- ret = virtio_transport_send_skb(skb, vq, vsock);
>
>nit: maybe we can add a comment here:
> /* GFP_ATOMIC because we are in RCU section, so we can't sleep */
>>+ ret = virtio_transport_send_skb(skb, vq, vsock, GFP_ATOMIC);
>> if (ret == 0)
>> virtqueue_kick(vq);
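
(With the comment suggested above applied, the fast path would end up
reading like this; just the hunk combined with the nit:)

	/* GFP_ATOMIC because we are in RCU section, so we can't sleep */
	ret = virtio_transport_send_skb(skb, vq, vsock, GFP_ATOMIC);
	if (ret == 0)
		virtqueue_kick(vq);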
>>
>>--
>>MST
>>
>
>I'll run some tests and come back with R-b when it's done.
I replicated the issue by enabling CONFIG_DEBUG_ATOMIC_SLEEP.
With that enabled, as soon as I run iperf-vsock, dmesg is flooded with
sleeping-while-atomic warnings. With this patch applied instead,
everything is fine.
I also ran the usual tests with various debugging options enabled and
everything seems okay.
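
(For the record, what CONFIG_DEBUG_ATOMIC_SLEEP catches here is roughly
the check below; a simplified sketch from memory, not verbatim mm code:)

	/* Blocking allocations funnel through something like this: */
	static inline void might_alloc(gfp_t gfp_mask)
	{
		/* Only gfp masks that allow blocking (e.g. GFP_KERNEL,
		 * which includes __GFP_DIRECT_RECLAIM) reach might_sleep():
		 */
		might_sleep_if(gfpflags_allow_blocking(gfp_mask));
	}

With CONFIG_DEBUG_ATOMIC_SLEEP, might_sleep() complains when it runs
with preemption disabled or inside an RCU read-side critical section,
which is exactly what the fast path did with GFP_KERNEL. GFP_ATOMIC
does not allow blocking, so it passes the check silently.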
With or without adding the comment I suggested in the previous email:
Reviewed-by: Stefano Garzarella <sgarzare@...hat.com>