Message-ID: <20221206062604.GB6595@lst.de>
Date: Tue, 6 Dec 2022 07:26:04 +0100
From: Christoph Hellwig <hch@....de>
To: Lei Rao <lei.rao@...el.com>
Cc: kbusch@...nel.org, axboe@...com, kch@...dia.com, hch@....de,
sagi@...mberg.me, alex.williamson@...hat.com, cohuck@...hat.com,
jgg@...pe.ca, yishaih@...dia.com,
shameerali.kolothum.thodi@...wei.com, kevin.tian@...el.com,
mjrosato@...ux.ibm.com, linux-kernel@...r.kernel.org,
linux-nvme@...ts.infradead.org, kvm@...r.kernel.org,
eddie.dong@...el.com, yadong.li@...el.com, yi.l.liu@...el.com,
Konrad.wilk@...cle.com, stephen@...eticom.com, hang.yuan@...el.com
Subject: Re: [RFC PATCH 5/5] nvme-vfio: Add a document for the NVMe device
On Tue, Dec 06, 2022 at 01:58:16PM +0800, Lei Rao wrote:
> The documentation describes the details of the NVMe hardware
> extension to support VFIO live migration.
This is not an NVMe hardware extension; it is a really strange and
half-assed intel-specific extension to nvme which, like any other
vendor-specific non-standard extension to nvme, we refuse to support
in Linux.
There is a TPAR for live migration building blocks under discussion in
the NVMe technical working group. It will still require mediation of
access to the admin queue to deal with the huge amount of state nvme
has that needs to be migrated (and which doesn't seem to be covered at
all here). In Linux the equivalent would be to implement an mdev driver
that allows passing the I/O queues through to a guest, but it might
be a better idea to handle the device model emulation entirely in
QEMU (or other userspace device models) and just find a way to expose
enough of the I/O queues to userspace.
The current TPAR seems to be very complicated for that, as in many
cases we'd only need a way to tie certain namespaces to certain I/O
queues and not waste a lot of resources on the rest of the controller.
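
To make the admin/IO split behind that mediation concrete: in the NVMe
register layout the doorbells start at BAR0 offset 0x1000 with a stride
of (4 << CAP.DSTRD) bytes, and the first pair (admin SQ tail, admin CQ
head) belongs to the admin queue, so a mediating driver can trap exactly
that window and pass the rest through. A rough standalone sketch (all
function names are made up; only the offsets come from the NVMe spec):

/*
 * Sketch only: classify a guest write to NVMe BAR0.  Admin queue
 * doorbells must be trapped and mediated by the host; I/O queue
 * doorbells can be passed through.  Per the NVMe spec the doorbells
 * start at offset 0x1000, the stride is (4 << CAP.DSTRD) bytes, and
 * the first two doorbells belong to the admin queue.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NVME_DB_BASE	0x1000ULL

/* True if the BAR0 offset hits one of the two admin doorbells. */
static bool nvme_offset_is_admin_db(uint64_t off, unsigned int dstrd)
{
	uint64_t stride = 4ULL << dstrd;	/* from CAP.DSTRD */

	return off >= NVME_DB_BASE && off < NVME_DB_BASE + 2 * stride;
}

/* Hypothetical mediation entry point for a trapped BAR0 write. */
static void mediate_bar0_write(uint64_t off, uint32_t val,
			       unsigned int dstrd)
{
	if (off < NVME_DB_BASE || nvme_offset_is_admin_db(off, dstrd)) {
		/*
		 * Control registers and admin doorbells: emulate in
		 * the host so the migratable state stays under host
		 * control.
		 */
		printf("trap+emulate write 0x%x at 0x%llx\n",
		       val, (unsigned long long)off);
	} else {
		/*
		 * I/O queue doorbell: in a real driver this region
		 * would simply be mmap()ed into the guest instead of
		 * being trapped at all.
		 */
		printf("pass-through write 0x%x at 0x%llx\n",
		       val, (unsigned long long)off);
	}
}

int main(void)
{
	/* DSTRD = 0 -> 4-byte stride: admin doorbells at 0x1000/0x1004. */
	mediate_bar0_write(0x1000, 1, 0);	/* admin SQ tail: trapped */
	mediate_bar0_write(0x1008, 1, 0);	/* I/O SQ 1 tail: pass-through */
	return 0;
}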