[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <dca2f6ff-b586-461d-936d-e0b9edbe7642@amazon.com>
Date: Fri, 15 Nov 2024 16:49:14 +0100
From: Alexander Graf <graf@...zon.com>
To: Stefano Garzarella <sgarzare@...hat.com>
CC: <netdev@...r.kernel.org>, <linux-kernel@...r.kernel.org>,
<virtualization@...ts.linux.dev>, <kvm@...r.kernel.org>, Asias He
<asias@...hat.com>, "Michael S. Tsirkin" <mst@...hat.com>, Paolo Abeni
<pabeni@...hat.com>, Jakub Kicinski <kuba@...nel.org>, Eric Dumazet
<edumazet@...gle.com>, "David S. Miller" <davem@...emloft.net>, "Stefan
Hajnoczi" <stefanha@...hat.com>
Subject: Re: [PATCH] vsock/virtio: Remove queued_replies pushback logic
Hi Stefano,
On 15.11.24 12:59, Stefano Garzarella wrote:
>
> On Fri, Nov 15, 2024 at 10:30:16AM +0000, Alexander Graf wrote:
>> Ever since the introduction of the virtio vsock driver, it included
>> pushback logic that blocks it from taking any new RX packets until the
>> TX queue backlog becomes shallower than the virtqueue size.
>>
>> This logic works fine when you connect a user space application on the
>> hypervisor with a virtio-vsock target, because the guest will stop
>> receiving data until the host pulled all outstanding data from the VM.
>
> So, why not skipping this only when talking with a sibling VM?
I don't think there is a way to know, is there?
>
>>
>> With Nitro Enclaves however, we connect 2 VMs directly via vsock:
>>
>> Parent Enclave
>>
>> RX -------- TX
>> TX -------- RX
>>
>> This means we now have 2 virtio-vsock backends that both have the
>> pushback
>> logic. If the parent's TX queue runs full at the same time as the
>> Enclave's, both virtio-vsock drivers fall into the pushback path and
>> no longer accept RX traffic. However, that RX traffic is TX traffic on
>> the other side which blocks that driver from making any forward
>> progress. We're not in a deadlock.
>>
>> To resolve this, let's remove that pushback logic altogether and rely on
>> higher levels (like credits) to ensure we do not consume unbounded
>> memory.
>
> I spoke quickly with Stefan who has been following the development from
> the beginning and actually pointed out that there might be problems
> with the control packets, since credits only covers data packets, so
> it doesn't seem like a good idea remove this mechanism completely.
Can you help me understand which situations the current mechanism really
helps with, so we can look at alternatives?
>
>>
>> Fixes: 0ea9e1d3a9e3 ("VSOCK: Introduce virtio_transport.ko")
>
> I'm not sure we should add this Fixes tag, this seems very risky
> backporting on stable branches IMHO.
Which situations do you believe it will genuinely break anything in? As
it stands today, if you run upstream parent and enclave and hammer them
with vsock traffic, you get into a deadlock. Even without the flow
control, you will never hit a deadlock. But you may get a brown-out like
situation while Linux is flushing its buffers.
Ideally we want to have actual flow control to mitigate the problem
altogether. But I'm not quite sure how and where. Just blocking all
receiving traffic causes problems.
> If we cannot find a better mechanism to replace this with something
> that works both guest <-> host and guest <-> guest, I would prefer
> to do this just for guest <-> guest communication.
> Because removing this completely seems too risky for me, at least
> without a proof that control packets are fine.
So your concern is that control packets would not receive pushback, so
we would allow unbounded traffic to get queued up? Can you suggest
options to help with that?
Alex
Amazon Web Services Development Center Germany GmbH
Krausenstr. 38
10117 Berlin
Geschaeftsfuehrung: Christian Schlaeger, Jonathan Weiss
Eingetragen am Amtsgericht Charlottenburg unter HRB 257764 B
Sitz: Berlin
Ust-ID: DE 365 538 597
Powered by blists - more mailing lists