Message-ID: <BL0PR11MB3042F27C99080E4B51318B338A1B9@BL0PR11MB3042.namprd11.prod.outlook.com>
Date: Tue, 6 Dec 2022 18:00:27 +0000
From: "Dong, Eddie" <eddie.dong@...el.com>
To: Christoph Hellwig <hch@....de>, Jason Gunthorpe <jgg@...pe.ca>
CC: "Rao, Lei" <lei.rao@...el.com>,
"kbusch@...nel.org" <kbusch@...nel.org>,
"axboe@...com" <axboe@...com>, "kch@...dia.com" <kch@...dia.com>,
"sagi@...mberg.me" <sagi@...mberg.me>,
"alex.williamson@...hat.com" <alex.williamson@...hat.com>,
"cohuck@...hat.com" <cohuck@...hat.com>,
"yishaih@...dia.com" <yishaih@...dia.com>,
"shameerali.kolothum.thodi@...wei.com"
<shameerali.kolothum.thodi@...wei.com>,
"Tian, Kevin" <kevin.tian@...el.com>,
"mjrosato@...ux.ibm.com" <mjrosato@...ux.ibm.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"linux-nvme@...ts.infradead.org" <linux-nvme@...ts.infradead.org>,
"kvm@...r.kernel.org" <kvm@...r.kernel.org>,
"Li, Yadong" <yadong.li@...el.com>,
"Liu, Yi L" <yi.l.liu@...el.com>,
"Wilk, Konrad" <konrad.wilk@...cle.com>,
"stephen@...eticom.com" <stephen@...eticom.com>,
"Yuan, Hang" <hang.yuan@...el.com>
Subject: RE: [RFC PATCH 5/5] nvme-vfio: Add a document for the NVMe device
> -----Original Message-----
> From: Christoph Hellwig <hch@....de>
> Sent: Tuesday, December 6, 2022 7:36 AM
> To: Jason Gunthorpe <jgg@...pe.ca>
> Cc: Christoph Hellwig <hch@....de>; Rao, Lei <Lei.Rao@...el.com>;
> kbusch@...nel.org; axboe@...com; kch@...dia.com; sagi@...mberg.me;
> alex.williamson@...hat.com; cohuck@...hat.com; yishaih@...dia.com;
> shameerali.kolothum.thodi@...wei.com; Tian, Kevin <kevin.tian@...el.com>;
> mjrosato@...ux.ibm.com; linux-kernel@...r.kernel.org; linux-
> nvme@...ts.infradead.org; kvm@...r.kernel.org; Dong, Eddie
> <eddie.dong@...el.com>; Li, Yadong <yadong.li@...el.com>; Liu, Yi L
> <yi.l.liu@...el.com>; Wilk, Konrad <konrad.wilk@...cle.com>;
> stephen@...eticom.com; Yuan, Hang <hang.yuan@...el.com>
> Subject: Re: [RFC PATCH 5/5] nvme-vfio: Add a document for the NVMe device
>
> On Tue, Dec 06, 2022 at 11:28:12AM -0400, Jason Gunthorpe wrote:
> > I'm interested as well; my mental model goes as far as mlx5 and
> > hisilicon, so if nvme prevents the VFs from being contained units, it
> > is a really big deviation from VFIO's migration design.
>
> In NVMe the controller (which maps to a PCIe physical or virtual
> function) is unfortunately not very self-contained. A lot of state is
> subsystem-wide, where the subsystem is, roughly speaking, the container for
> all controllers that share storage. That is the right thing to do for, say,
> dual-ported SSDs that are used for clustering or multi-pathing, but for
> tenant isolation it is about as wrong as it gets.
The NVMe spec is general, but the implementation details (such as internal
state) may be vendor-specific. If the migration happens between two identical
NVMe devices (same vendor/device with the same firmware version), migration of
subsystem-wide state can be covered naturally, right?
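To make the "identical devices" requirement concrete, a hypervisor could gate
migration on matching Identify Controller fields. The field names below follow
the NVMe Identify Controller layout (VID, SSVID, MN, FR), but the helper itself
is purely illustrative and is not part of the RFC patch set:

```python
# Illustrative sketch: allow migration only between identical NVMe devices.
# Field names mirror NVMe Identify Controller (VID, SSVID, MN, FR); the
# dataclass and the check are hypothetical, not from the patches.
from dataclasses import dataclass

@dataclass(frozen=True)
class IdentifyController:
    vid: int      # PCI Vendor ID
    ssvid: int    # PCI Subsystem Vendor ID
    mn: str       # Model Number
    fr: str       # Firmware Revision

def migration_compatible(src: IdentifyController,
                         dst: IdentifyController) -> bool:
    """True only if vendor, model, and firmware revision all match."""
    return (src.vid == dst.vid and src.ssvid == dst.ssvid
            and src.mn == dst.mn and src.fr == dst.fr)
```

Under this policy, any firmware mismatch rejects the migration up front, so the
opaque subsystem-wide state only ever has to be interpreted by the same
firmware that produced it.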
>
> There is nothing in the NVMe spec that prohibits you from implementing
> multiple subsystems for multiple functions of a PCIe device, but if you do
> that, there is absolutely no support in the spec to manage shared resources
> or any other interaction between them.
In the IPU/DPU area, multiple VFs with SR-IOV seem to be widely adopted.
For VFs, the usage of shared resources can be viewed as implementation-specific,
and saving/loading the state of a VF can rely on the hardware/firmware itself.
Migration of NVMe devices across vendors/devices is another story: it may
be useful, but it brings additional challenges.
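If the VF's hardware/firmware owns save/load, the driver's job reduces to
walking the VFIO v2 migration state machine and issuing the device commands at
each step. The state names below follow the VFIO uAPI (VFIO_DEVICE_STATE_*),
but the transition table is a simplified sketch, not the full arc graph from
the kernel:

```python
# Minimal sketch of the VFIO v2 migration flow a VF driver would back with
# its firmware save/load commands. Simplified illustration only: the real
# uAPI has more states (e.g. P2P variants) and computes multi-step arcs.
RUNNING, STOP, STOP_COPY, RESUMING = "RUNNING", "STOP", "STOP_COPY", "RESUMING"

ALLOWED = {
    (RUNNING, STOP),       # source: quiesce the VF
    (STOP, STOP_COPY),     # source: firmware serializes internal VF state
    (STOP_COPY, STOP),
    (STOP, RESUMING),      # destination: firmware loads the saved state
    (RESUMING, STOP),
    (STOP, RUNNING),       # destination: resume the VF
}

def transition(cur: str, new: str) -> str:
    """Validate one migration state transition; return the new state."""
    if (cur, new) not in ALLOWED:
        raise ValueError(f"invalid transition {cur} -> {new}")
    return new
```

The point of the discussion above is that the STOP_COPY/RESUMING payload is an
opaque firmware blob, which is exactly why cross-vendor migration is the
harder problem.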