Message-ID: <20250501094427-mutt-send-email-mst@kernel.org>
Date: Thu, 1 May 2025 09:44:38 -0400
From: "Michael S. Tsirkin" <mst@...hat.com>
To: Jon Kohler <jon@...anix.com>
Cc: Jason Wang <jasowang@...hat.com>,
Eugenio Pérez <eperezma@...hat.com>,
kvm@...r.kernel.org, virtualization@...ts.linux.dev,
netdev@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH net-next v3] vhost/net: Defer TX queue re-enable until
after sendmsg
On Wed, Apr 30, 2025 at 07:04:28PM -0700, Jon Kohler wrote:
> In handle_tx_copy, TX batching processes packets below ~PAGE_SIZE and
> batches up to 64 messages before calling sock->sendmsg.
>
> Currently, when there are no more messages on the ring to dequeue,
> handle_tx_copy re-enables kicks on the ring *before* firing off the
> batch sendmsg. However, sock->sendmsg incurs a non-zero delay,
> especially if it needs to wake up a thread (e.g., another vhost worker).
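>
> (Illustrative sketch, condensed from the pre-patch code removed in the
> diff below; not a verbatim copy of handle_tx_copy:)
>
>   do {
>           ...
>           if (head == vq->num) {  /* nothing left on the ring */
>                   /* kicks are re-enabled *before* the final sendmsg */
>                   if (unlikely(vhost_enable_notify(&net->dev, vq))) {
>                           vhost_disable_notify(&net->dev, vq);
>                           continue;
>                   }
>                   break;
>           }
>           ...
>   } while (...);
>   vhost_tx_batch(net, nvq, sock, &msg);  /* batched sendmsg fires here */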
>
> If the guest submits additional messages immediately after the last ring
> check and disablement, it triggers an EPT_MISCONFIG vmexit to attempt to
> kick the vhost worker. This may happen while the worker is still
> processing the sendmsg, leading to wasteful exit(s).
>
> This is particularly problematic for a single-threaded guest
> submitter, which must exit, wait for the exit to be processed
> (potentially involving a TTWU), and then resume.
>
> In scenarios like a constant stream of UDP messages, this results in a
> sawtooth pattern where the submitter frequently vmexits, and the
> vhost-net worker alternates between sleeping and waking.
>
> A common solution is to configure vhost-net busy polling via userspace
> (e.g., qemu poll-us). However, treating the sendmsg as the "busy"
> period by keeping kicks disabled during the final sendmsg and
> performing one additional ring check afterward provides a significant
> performance improvement without any excess busy poll cycles.
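>
> In short, the ordering change (derived from the diff below) is:
>
>   before: ring empty -> vhost_enable_notify() -> vhost_tx_batch()/sendmsg
>   after:  ring empty -> vhost_tx_batch()/sendmsg -> vhost_net_busy_poll_try_queue()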
>
> If messages are found in the ring after the final sendmsg, requeue the
> TX handler. This ensures fairness for the RX handler and allows
> vhost_run_work_list to cond_resched() as needed.
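>
> For reference, that final check is the existing
> vhost_net_busy_poll_try_queue() helper in drivers/vhost/net.c, which
> looks roughly like this (paraphrased, not part of this patch):
>
>   static void vhost_net_busy_poll_try_queue(struct vhost_net *net,
>                                             struct vhost_virtqueue *vq)
>   {
>           if (!vhost_vq_avail_empty(&net->dev, vq)) {
>                   /* new work showed up: requeue the TX handler */
>                   vhost_poll_queue(&vq->poll);
>           } else if (unlikely(vhost_enable_notify(&net->dev, vq))) {
>                   /* raced with new buffers while re-enabling kicks */
>                   vhost_disable_notify(&net->dev, vq);
>                   vhost_poll_queue(&vq->poll);
>           }
>   }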
>
> Test Case
> TX VM: taskset -c 2 iperf3 -c rx-ip-here -t 60 -p 5200 -b 0 -u -i 5
> RX VM: taskset -c 2 iperf3 -s -p 5200 -D
> Kernel 6.12.0, each worker backed by a tun interface configured with
> IFF_NAPI.
> Note: the TCP side is largely unchanged, as that path was copy bound.
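>
> (Measurement method is an assumption, not stated in the patch: one way
> to get per-reason vmexit counts such as EPT_MISCONFIG/second is
> "perf kvm stat record -p <qemu-pid>" followed by "perf kvm stat report"
> on the host.)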
>
> 6.12.0 unpatched
> EPT_MISCONFIG/second: 5411
> Datagrams/second: ~382k
> Interval Transfer Bitrate Lost/Total Datagrams
> 0.00-30.00 sec 15.5 GBytes 4.43 Gbits/sec 0/11481630 (0%) sender
>
> 6.12.0 patched
> EPT_MISCONFIG/second: 58 (~93x reduction)
> Datagrams/second: ~650k (~1.7x increase)
> Interval Transfer Bitrate Lost/Total Datagrams
> 0.00-30.00 sec 26.4 GBytes 7.55 Gbits/sec 0/19554720 (0%) sender
>
> Acked-by: Jason Wang <jasowang@...hat.com>
> Signed-off-by: Jon Kohler <jon@...anix.com>
Acked-by: Michael S. Tsirkin <mst@...hat.com>
> ---
> v2->v3: Address MST's comments regarding busyloop_intr
> https://patchwork.kernel.org/project/netdevbpf/patch/20250420010518.2842335-1-jon@nutanix.com/
> v1->v2: Move from net to net-next (no changes)
> https://patchwork.kernel.org/project/netdevbpf/patch/20250401043230.790419-1-jon@nutanix.com/
> ---
> drivers/vhost/net.c | 30 +++++++++++++++++++++---------
> 1 file changed, 21 insertions(+), 9 deletions(-)
>
> diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
> index b9b9e9d40951..7cbfc7d718b3 100644
> --- a/drivers/vhost/net.c
> +++ b/drivers/vhost/net.c
> @@ -755,10 +755,10 @@ static void handle_tx_copy(struct vhost_net *net, struct socket *sock)
> int err;
> int sent_pkts = 0;
> bool sock_can_batch = (sock->sk->sk_sndbuf == INT_MAX);
> + bool busyloop_intr;
>
> do {
> - bool busyloop_intr = false;
> -
> + busyloop_intr = false;
> if (nvq->done_idx == VHOST_NET_BATCH)
> vhost_tx_batch(net, nvq, sock, &msg);
>
> @@ -769,13 +769,10 @@ static void handle_tx_copy(struct vhost_net *net, struct socket *sock)
> break;
> /* Nothing new? Wait for eventfd to tell us they refilled. */
> if (head == vq->num) {
> - if (unlikely(busyloop_intr)) {
> - vhost_poll_queue(&vq->poll);
> - } else if (unlikely(vhost_enable_notify(&net->dev,
> - vq))) {
> - vhost_disable_notify(&net->dev, vq);
> - continue;
> - }
> + /* Kicks are disabled at this point, so break the loop
> + * and process any remaining batched packets. The queue
> + * will be re-enabled afterwards.
> + */
> break;
> }
>
> @@ -825,7 +822,22 @@ static void handle_tx_copy(struct vhost_net *net, struct socket *sock)
> ++nvq->done_idx;
> } while (likely(!vhost_exceeds_weight(vq, ++sent_pkts, total_len)));
>
> + /* Kicks are still disabled, dispatch any remaining batched msgs. */
> vhost_tx_batch(net, nvq, sock, &msg);
> +
> + if (unlikely(busyloop_intr))
> + /* If interrupted while busy polling, requeue the
> + * handler to be fair to handle_rx as well as other
> + * tasks waiting on the cpu.
> + */
> + vhost_poll_queue(&vq->poll);
> + else
> + /* All of our work has been completed; however, before
> + * leaving the TX handler, do one last check for work,
> + * and requeue the handler if necessary. If there is no
> + * work, the queue will be re-enabled.
> + */
> + vhost_net_busy_poll_try_queue(net, vq);
> }
>
> static void handle_tx_zerocopy(struct vhost_net *net, struct socket *sock)
> --
> 2.43.0