Message-ID: <20240617-stage-vdpa-vq-precreate-v1-0-8c0483f0ca2a@nvidia.com>
Date: Mon, 17 Jun 2024 18:07:34 +0300
From: Dragos Tatulea <dtatulea@...dia.com>
To: "Michael S. Tsirkin" <mst@...hat.com>, Jason Wang <jasowang@...hat.com>,
Xuan Zhuo <xuanzhuo@...ux.alibaba.com>, Eugenio Pérez
<eperezma@...hat.com>, Saeed Mahameed <saeedm@...dia.com>, Leon Romanovsky
<leon@...nel.org>, Tariq Toukan <tariqt@...dia.com>, Si-Wei Liu
<si-wei.liu@...cle.com>
CC: <virtualization@...ts.linux.dev>, <linux-kernel@...r.kernel.org>,
<linux-rdma@...r.kernel.org>, <netdev@...r.kernel.org>, Dragos Tatulea
<dtatulea@...dia.com>, Cosmin Ratiu <cratiu@...dia.com>
Subject: [PATCH vhost 00/23] vdpa/mlx5: Pre-create HW VQs to reduce LM
downtime
According to the measurements of vDPA Live Migration downtime [0], one
large source of downtime is the creation of hardware VQs and their
associated resources for the device on the destination VM.
Previous series ([1], [2]) addressed the source part of the Live
Migration downtime. This series addresses the destination part: instead
of creating hardware VQs and their dependent resources when the device
goes into the DRIVER_OK state (which is during downtime), create "blank"
VQs at device creation time and only modify them to the received
configuration before starting the VQs (DRIVER_OK state).
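
For illustration, a minimal sketch of this split (the struct, helpers
and comments below are hypothetical stand-ins, not the actual
mlx5_vdpa code or firmware commands):

#include <errno.h>
#include <stdbool.h>

/* Hypothetical stand-in for the driver's per-VQ state. */
struct hw_vq {
	bool created;		/* HW object exists (a "blank" VQ is enough) */
	bool configured;	/* guest configuration pushed via MODIFY */
	unsigned int size;
};

/* At .dev_add time: create a blank HW VQ with a default size, without
 * any guest-provided addresses or indices. This is the expensive step
 * that moves out of the downtime window. */
static int precreate_blank_vq(struct hw_vq *vq, unsigned int default_size)
{
	vq->size = default_size;
	vq->created = true;	/* stands in for the firmware CREATE command */
	vq->configured = false;
	return 0;
}

/* At DRIVER_OK time: only MODIFY the pre-created VQ with the received
 * configuration and start it, which is much cheaper than a full CREATE. */
static int configure_and_resume_vq(struct hw_vq *vq)
{
	if (!vq->created)
		return -EINVAL;
	vq->configured = true;	/* stands in for the firmware MODIFY command */
	return 0;		/* Init -> Ready transition follows */
}
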
The caveat is that mlx5_vdpa VQs don't support modifying the VQ size:
VQs are created with a convenient default size and recreated when a
different size is configured.
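
Continuing the same hypothetical sketch (reusing struct hw_vq and
precreate_blank_vq() from above), the size caveat boils down to:

/* The VQ size cannot be modified in place, so configuring a different
 * size destroys the pre-created VQ and creates a new blank one. */
static int change_vq_size(struct hw_vq *vq, unsigned int new_size)
{
	if (vq->size == new_size)
		return 0;	/* the pre-created default already matches */

	vq->created = false;	/* stands in for the firmware DESTROY command */
	return precreate_blank_vq(vq, new_size);
}
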
The beginning of the series consists of refactorings. After that, the
following preparations are made:
- Allow creation of "blank" VQs by not configuring them during
  create_virtqueue() when there are no modified fields.
- Consolidate the VQ Init -> Ready state transition into resume_vq()
  (sketched after this outline).
- Add error handling to the suspend/resume code paths.
Then VQs are created at device creation time.
Finally, the special cases that need full VQ resource recreation are
handled.
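
Below is a hedged sketch of how the consolidated resume/suspend
handling is meant to behave; the enum, helpers and error codes are
illustrative only, not the real mlx5_vdpa functions.

#include <errno.h>

enum vq_state { VQ_INIT, VQ_READY, VQ_SUSPEND };

struct vq {
	enum vq_state state;
};

static int modify_vq_state(struct vq *vq, enum vq_state next)
{
	vq->state = next;	/* stands in for the firmware MODIFY command */
	return 0;
}

/* All "make this VQ Ready" paths funnel through resume_vq(), which now
 * also accepts the Init -> Ready transition used by pre-created VQs and
 * propagates an error code instead of failing silently. */
static int resume_vq(struct vq *vq)
{
	switch (vq->state) {
	case VQ_INIT:		/* pre-created, not yet started */
	case VQ_SUSPEND:	/* previously suspended */
		return modify_vq_state(vq, VQ_READY);
	case VQ_READY:
		return 0;	/* already running, nothing to do */
	}
	return -EINVAL;
}

static int suspend_vq(struct vq *vq)
{
	if (vq->state != VQ_READY)
		return 0;	/* only running VQs need suspending */
	return modify_vq_state(vq, VQ_SUSPEND);
}
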
On a 64 CPU, 256 GB VM with 1 vDPA device of 16 VQPs, the full VQ
resource creation + resume time was ~370 ms. Now it's down to 60 ms
(only VQ config and resume). The measurements were done on a
ConnectX-6 Dx based vDPA device.
[0] https://lore.kernel.org/qemu-devel/1701970793-6865-1-git-send-email-si-wei.liu@oracle.com/
[1] https://lore.kernel.org/lkml/20231018171456.1624030-2-dtatulea@nvidia.com
[2] https://lore.kernel.org/lkml/20231219180858.120898-1-dtatulea@nvidia.com
---
Dragos Tatulea (23):
vdpa/mlx5: Clarify meaning through function rename
vdpa/mlx5: Make setup/teardown_vq_resources() symmetrical
vdpa/mlx5: Drop redundant code
vdpa/mlx5: Drop redundant check in teardown_virtqueues()
vdpa/mlx5: Iterate over active VQs during suspend/resume
vdpa/mlx5: Remove duplicate suspend code
vdpa/mlx5: Initialize and reset device with one queue pair
vdpa/mlx5: Clear and reinitialize software VQ data on reset
vdpa/mlx5: Add support for modifying the virtio_version VQ field
vdpa/mlx5: Add support for modifying the VQ features field
vdpa/mlx5: Set an initial size on the VQ
vdpa/mlx5: Start off rqt_size with max VQPs
vdpa/mlx5: Set mkey modified flags on all VQs
vdpa/mlx5: Allow creation of blank VQs
vdpa/mlx5: Accept Init -> Ready VQ transition in resume_vq()
vdpa/mlx5: Add error code for suspend/resume VQ
vdpa/mlx5: Consolidate all VQ modify to Ready to use resume_vq()
vdpa/mlx5: Forward error in suspend/resume device
vdpa/mlx5: Use suspend/resume during VQP change
vdpa/mlx5: Pre-create hardware VQs at vdpa .dev_add time
vdpa/mlx5: Re-create HW VQs under certain conditions
vdpa/mlx5: Don't reset VQs more than necessary
vdpa/mlx5: Don't enable non-active VQs in .set_vq_ready()
drivers/vdpa/mlx5/net/mlx5_vnet.c | 422 +++++++++++++++++++++++++------------
drivers/vdpa/mlx5/net/mlx5_vnet.h | 2 +
include/linux/mlx5/mlx5_ifc_vdpa.h | 2 +
3 files changed, 291 insertions(+), 135 deletions(-)
---
base-commit: c8fae27d141a32a1624d0d0d5419d94252824498
change-id: 20240617-stage-vdpa-vq-precreate-76df151bed08
Best regards,
--
Dragos Tatulea <dtatulea@...dia.com>