Message-ID: <CAGxU2F4ca5pxW3RX4wzsTx3KRBtxLK_rO9KxPgUtqcaSNsqXCA@mail.gmail.com>
Date: Mon, 19 Dec 2022 16:41:23 +0100
From: Stefano Garzarella <sgarzare@...hat.com>
To: Arseniy Krasnov <AVKrasnov@...rdevices.ru>
Cc: Stefan Hajnoczi <stefanha@...hat.com>,
"edumazet@...gle.com" <edumazet@...gle.com>,
"David S. Miller" <davem@...emloft.net>,
Jakub Kicinski <kuba@...nel.org>,
Paolo Abeni <pabeni@...hat.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
"virtualization@...ts.linux-foundation.org"
<virtualization@...ts.linux-foundation.org>,
"kvm@...r.kernel.org" <kvm@...r.kernel.org>,
kernel <kernel@...rdevices.ru>,
Krasnov Arseniy <oxffffaa@...il.com>,
Arseniy Krasnov <AVKrasnov@...rdevices.ru>
Subject: Re: [RFC PATCH v1 0/2] virtio/vsock: fix mutual rx/tx hungup
Hi Arseniy,
On Sat, Dec 17, 2022 at 8:42 PM Arseniy Krasnov <AVKrasnov@...rdevices.ru> wrote:
>
> Hello,
>
> it seems I found a strange thing (maybe a bug) where the sender ('tx' below)
> and the receiver ('rx' below) can get stuck forever. A potential fix is in
> the first patch; the second patch contains a reproducer based on the vsock
> test suite. The reproducer is simple: tx just sends data to rx with the
> 'write()' syscall, rx dequeues it using the 'read()' syscall and uses
> 'poll()' to wait. I run the server in the host and the client in the guest.
>
> rx side params:
> 1) SO_VM_SOCKETS_BUFFER_SIZE is 256 KB (i.e. the default).
> 2) SO_RCVLOWAT is 128 KB.
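(Just to make the setup concrete, the RX side of the reproducer boils down
to something like the sketch below; connection setup and error handling are
omitted and the names are only illustrative, it is not the code from patch 2.)

#include <poll.h>
#include <stdint.h>
#include <unistd.h>
#include <sys/socket.h>
#include <linux/vm_sockets.h>

/* RX side: 256 KB receive buffer, SO_RCVLOWAT = 128 KB, then wait in
 * poll() and read in 64 KB chunks.
 */
static void rx_loop(int fd)
{
	uint64_t buf_size = 256 * 1024;
	int lowat = 128 * 1024;
	char buf[64 * 1024];
	struct pollfd pfd = { .fd = fd, .events = POLLIN };

	setsockopt(fd, AF_VSOCK, SO_VM_SOCKETS_BUFFER_SIZE,
		   &buf_size, sizeof(buf_size));
	setsockopt(fd, SOL_SOCKET, SO_RCVLOWAT, &lowat, sizeof(lowat));

	for (;;) {
		if (poll(&pfd, 1, -1) <= 0 || !(pfd.revents & POLLIN))
			break;

		if (read(fd, buf, sizeof(buf)) <= 0)
			break;
	}
}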
>
> What happens in the reproducer step by step:
>
Let me add the values of the variables involved, to make it easier to follow:
RX: buf_alloc = 256 KB; fwd_cnt = 0; last_fwd_cnt = 0;
free_space = buf_alloc - (fwd_cnt - last_fwd_cnt) = 256 KB
The credit update is sent if
free_space < VIRTIO_VSOCK_MAX_PKT_BUF_SIZE [64 KB]
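To make that condition concrete, here is a tiny model of the check (just a
sketch: the field names mirror struct virtio_vsock_sock, but this is not the
actual kernel code):

#include <stdbool.h>
#include <stdint.h>

#define VIRTIO_VSOCK_MAX_PKT_BUF_SIZE	(1024u * 64u)	/* 64 KB */

/* Per-socket counters, mirroring struct virtio_vsock_sock. */
struct vvs_model {
	uint32_t buf_alloc;	/* receive buffer size (256 KB here) */
	uint32_t fwd_cnt;	/* bytes dequeued by the application */
	uint32_t last_fwd_cnt;	/* fwd_cnt at the last credit update */
};

/* Models the check at the end of virtio_transport_stream_do_dequeue():
 * after dequeuing, a credit update is sent only if free_space has
 * dropped below VIRTIO_VSOCK_MAX_PKT_BUF_SIZE.
 */
static bool credit_update_needed(const struct vvs_model *vvs)
{
	uint32_t free_space;

	free_space = vvs->buf_alloc - (vvs->fwd_cnt - vvs->last_fwd_cnt);

	return free_space < VIRTIO_VSOCK_MAX_PKT_BUF_SIZE;
}

Plugging in the values from the steps below, the condition stays false for
the first three 64 KB reads and would become true only after a fourth read,
which never happens here because rx blocks in poll() first.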
> 1) tx tries to send 256 KB + 1 byte (in a single 'write()')
> 2) tx sends 256 KB, data reaches rx (rx_bytes == 256 KB)
> 3) tx waits for space in 'write()' to send the last 1 byte
> 4) rx does poll(), (rx_bytes >= rcvlowat) 256 KB >= 128 KB, POLLIN is set
> 5) rx reads 64 KB, credit update is not sent due to *
RX: buf_alloc = 256 KB; fwd_cnt = 64 KB; last_fwd_cnt = 0;
free_space = 192 KB
> 6) rx does poll(), (rx_bytes >= rcvlowat) 192 KB >= 128 KB, POLLIN is set
> 7) rx reads 64 KB, credit update is not sent due to *
RX: buf_alloc = 256 KB; fwd_cnt = 128 KB; last_fwd_cnt = 0;
free_space = 128 KB
> 8) rx does poll(), (rx_bytes >= rcvlowat) 128 KB >= 128 KB, POLLIN is set
> 9) rx reads 64 KB, credit update is not sent due to *
Right, (free_space < VIRTIO_VSOCK_MAX_PKT_BUF_SIZE) is still false.
RX: buf_alloc = 256 KB; fwd_cnt = 192 KB; last_fwd_cnt = 0;
free_space = 64 KB
> 10) rx does poll(), (rx_bytes < rcvlowat) 64 KB < 128 KB, rx waits in poll()
I agree that the TX is stuck because we are not sending the credit
update, but even if RX sent the credit update at step 9, RX still
wouldn't be woken up at step 10, right?
>
> * is an optimization in 'virtio_transport_stream_do_dequeue()' which
> sends OP_CREDIT_UPDATE only when not much space is left -
> less than VIRTIO_VSOCK_MAX_PKT_BUF_SIZE.
>
> Now the tx side waits for space inside write() and rx waits in poll() for
> 'rx_bytes' to reach the SO_RCVLOWAT value. Both sides will wait forever. I
> think a possible fix is to send a credit update not only when we have too
> little space, but also when the number of bytes in the receive queue is
> smaller than SO_RCVLOWAT and thus not enough to wake up the sleeping
> reader. I'm not sure about the correctness of this idea, but anyway - I
> think the problem above exists. What do you think?
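If I understand the idea correctly, the check would become something like
this (extending the small model above; a hypothetical sketch, not
necessarily what your patch does):

/* Proposed variant: also send a credit update when the bytes left in
 * the receive queue are below SO_RCVLOWAT, i.e. not enough to wake up
 * a reader sleeping in poll().
 */
static bool credit_update_needed_v2(const struct vvs_model *vvs,
				    uint32_t rx_bytes, uint32_t rcvlowat)
{
	uint32_t free_space;

	free_space = vvs->buf_alloc - (vvs->fwd_cnt - vvs->last_fwd_cnt);

	/* Existing rule. */
	if (free_space < VIRTIO_VSOCK_MAX_PKT_BUF_SIZE)
		return true;

	/* Proposed addition. */
	return rx_bytes < rcvlowat;
}

In the scenario above that would trigger at step 9 (rx_bytes = 64 KB <
128 KB), so TX gets unblocked.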
I'm not sure, I have to think more about it, but if RX reads less than
SO_RCVLOWAT, I expect it's normal to end up stuck.
In this case we are only unblocking TX, but even if it sends that single
byte, RX is still stuck and won't consume it, so it was useless to wake
up TX if RX won't consume the data anyway, right?
If RX woke up (e.g. with SO_RCVLOWAT = 64 KB) and read the remaining 64 KB,
it would still send the credit update even without this patch (at that point
free_space = 256 KB - 256 KB = 0, which is < 64 KB), and TX would send the
last byte.
Thanks,
Stefano