Message-ID: <CAF=yD-KSek+LmZu0X0TXetmFEQ59iF3NpjZ4KugbwLo1BGfhaA@mail.gmail.com>
Date: Wed, 23 Aug 2017 23:28:24 -0400
From: Willem de Bruijn <willemdebruijn.kernel@...il.com>
To: "Michael S. Tsirkin" <mst@...hat.com>
Cc: Koichiro Den <den@...ipeden.com>, Jason Wang <jasowang@...hat.com>,
virtualization@...ts.linux-foundation.org,
Network Development <netdev@...r.kernel.org>
Subject: Re: [PATCH net-next] virtio-net: invoke zerocopy callback on xmit
path if no tx napi
>> > * as a generic solution, if we were to somehow overcome the safety issue, track
>> > the delay and do copy if some threshold is reached could be an answer, but it's
>> > hard for now.
>> > * so things like the current vhost-net implementation of deciding whether or not
>> > to do zerocopy beforehand referring the zerocopy tx error ratio is a point of
>> > practical compromise.
>>
>> The fragility of this mechanism is another argument for switching to tx napi
>> as default.
>>
>> Is there any more data about the Windows guest issues when completions
>> are not queued within a reasonable timeframe? What is this timescale, and
>> do we really need to work around it?
>
> I think it's pretty large, many milliseconds.
>
> But I wonder what you mean by "work around". Using buffers within a
> limited time frame sounds like a reasonable requirement to me.
Vhost-net zerocopy delays completions until the skb is actually
sent. Traffic shaping can introduce msec-timescale latencies.

The delay may actually be a useful signal. If the guest does not
orphan skbs early, TSQ will throttle the socket, causing queue
buildup on the host.

But if completions are queued in order, unrelated flows may be
throttled as well. Allowing out-of-order completions would resolve
this HoL blocking.
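
As a toy illustration of the in-order HoL problem (a standalone
sketch only loosely modeled on vhost_zerocopy_signal_used() in
drivers/vhost/net.c; the ring size and entry states are made up):

/* Toy model: completion signaling stops at the first in-flight
 * entry, so one slow skb delays all later, already-done entries. */
#include <stdio.h>

#define RING_SIZE 8
enum { PENDING, DONE };

int main(void)
{
	/* in-flight zerocopy entries between done_idx and upend_idx */
	int state[RING_SIZE] = { DONE, PENDING, DONE, DONE,
				 DONE, DONE, DONE, DONE };
	int done_idx = 0, upend_idx = RING_SIZE, i;

	for (i = done_idx; i != upend_idx; i++) {
		if (state[i] == PENDING)
			break;	/* head of line: stop at first pending */
		printf("signal completion for entry %d\n", i);
	}
	printf("entries %d..%d blocked behind entry %d\n",
	       i + 1, upend_idx - 1, i);
	return 0;
}

Here entry 1 is still in flight, so entries 2..7 are not signaled
to the guest even though they already finished.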
> Neither
> do I see why using tx interrupts within the guest would be a workaround -
> AFAIK the Windows driver uses tx interrupts.
It does not address completion latency itself. What I meant was
that in an interrupt-driven model, additional starvation issues,
such as the potential deadlock raised at the start of this thread,
or the timer delay observed before packets were orphaned in
virtio-net in commit b0c39dbdc204, are mitigated.

Specifically, it breaks the potential deadlock where sockets are
blocked waiting for completions (to free up budget in sndbuf, TSQ, ..),
yet completion handling is blocked waiting for a new packet to
trigger free_old_xmit_skbs from start_xmit.
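
To make the dependency explicit, here is a condensed sketch of the
non-napi transmit path (loosely following start_xmit() and
free_old_xmit_skbs() in drivers/net/virtio_net.c; details and error
handling are elided, so treat the exact calls as illustrative):

/* Condensed sketch, not the upstream code: used tx descriptors are
 * reclaimed only on the next transmission. If every sender blocks
 * on sndbuf/TSQ waiting for exactly these completions, start_xmit
 * never runs again and nothing reclaims: the deadlock above. */
static netdev_tx_t start_xmit(struct sk_buff *skb, struct net_device *dev)
{
	struct virtnet_info *vi = netdev_priv(dev);
	struct send_queue *sq = &vi->sq[skb_get_queue_mapping(skb)];

	free_old_xmit_skbs(sq);		/* only reclaim point without napi */

	xmit_skb(sq, skb);		/* post the new packet */

	if (!sq->napi.weight)
		skb_orphan(skb);	/* break the TSQ dependency early */

	virtqueue_kick(sq->vq);
	return NETDEV_TX_OK;
}

With tx napi, reclaim also runs from the tx interrupt handler, so
it no longer depends on the stack handing down another packet.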
>> That is the only thing keeping us from removing the HoL blocking in vhost-net zerocopy.
>
> We don't enable the network watchdog on virtio, but we could and maybe
> should.
Can you elaborate?