Message-ID: <20210325183157.GA32286@redsun51.ssa.fujisawa.hgst.com>
Date: Fri, 26 Mar 2021 03:31:57 +0900
From: Keith Busch <kbusch@...nel.org>
To: Jason Gunthorpe <jgg@...dia.com>
Cc: Bjorn Helgaas <helgaas@...nel.org>,
Alexander Duyck <alexander.duyck@...il.com>,
Leon Romanovsky <leon@...nel.org>,
Bjorn Helgaas <bhelgaas@...gle.com>,
Saeed Mahameed <saeedm@...dia.com>,
Leon Romanovsky <leonro@...dia.com>,
Jakub Kicinski <kuba@...nel.org>,
linux-pci <linux-pci@...r.kernel.org>,
linux-rdma@...r.kernel.org, Netdev <netdev@...r.kernel.org>,
Don Dutile <ddutile@...hat.com>,
Alex Williamson <alex.williamson@...hat.com>,
"David S . Miller" <davem@...emloft.net>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>
Subject: Re: [PATCH mlx5-next v7 0/4] Dynamically assign MSI-X vectors count
On Thu, Mar 25, 2021 at 02:36:46PM -0300, Jason Gunthorpe wrote:
> On Thu, Mar 25, 2021 at 12:21:44PM -0500, Bjorn Helgaas wrote:
>
> > NVMe and mlx5 have basically identical functionality in this respect.
> > Other devices and vendors will likely implement similar functionality.
> > It would be ideal if we had an interface generic enough to support
> > them all.
> >
> > Is the mlx5 interface proposed here sufficient to support the NVMe
> model? I think it's close, but not quite, because the NVMe
> > "offline" state isn't explicitly visible in the mlx5 model.
>
> I thought Keith basically said "offline" wasn't really useful as a
> distinct idea. It is an artifact of nvme being a standards body
> divorced from the operating system.
I think that was someone else who said that.
FWIW, the nvme "offline" state just means a driver can't use the nvme
capabilities of the device. You can bind a driver to it if you want, but
no IO will be possible, so it's fine to bind your VF to something like
vfio prior to starting a VM, or to leave it without any driver bound
during the initial resource assignment.
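For illustration, a minimal userspace sketch of that flow, binding a VF
to vfio-pci before starting the VM. The BDF 0000:01:00.1 is just a
placeholder, and the unbind step simply fails harmlessly if nothing is
bound yet:

#include <fcntl.h>
#include <string.h>
#include <unistd.h>

static int write_sysfs(const char *path, const char *val)
{
        int fd = open(path, O_WRONLY);

        if (fd < 0)
                return -1;
        if (write(fd, val, strlen(val)) < 0) {
                close(fd);
                return -1;
        }
        return close(fd);
}

int main(void)
{
        /* Steer the next probe of this VF to vfio-pci. */
        write_sysfs("/sys/bus/pci/devices/0000:01:00.1/driver_override",
                    "vfio-pci");
        /* Drop whatever driver is currently attached, if any. */
        write_sysfs("/sys/bus/pci/devices/0000:01:00.1/driver/unbind",
                    "0000:01:00.1");
        /* Reprobe the device; it now matches vfio-pci. */
        write_sysfs("/sys/bus/pci/drivers_probe", "0000:01:00.1");
        return 0;
}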
> In linux offline and no driver attached are the same thing, you'd
> never want an API to make a nvme device with a driver attached offline
> because it would break the driver.
>
> So I think it is good as is (well one of the 8 versions anyhow).
>
> Keith didn't go into detail why the queue allocations in nvme were any
> different than the queue allocations in mlx5.
The NVMe IO queue resources are assignable just like the MSI-X vectors,
but they're not always assigned 1:1. For example:
NVMe has an admin queue that always requires an interrupt vector. Does
the VM driver want this queue to share a vector with the IO queues, or
do we want a dedicated +1 vector for it?
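To make the accounting concrete, a rough sketch of that choice (the
helper name is made up, not how the nvme driver actually spells it):

#include <stdbool.h>

static unsigned int nvme_nr_vectors(unsigned int nr_io_queues,
                                    bool admin_shares_vector)
{
        /*
         * +1 gives the admin queue a dedicated vector; otherwise it
         * shares vector 0 with the first IO queue.
         */
        return admin_shares_vector ? nr_io_queues : nr_io_queues + 1;
}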
Maybe the VM is going to use a user space polling driver, so now you
don't even need MSI-X vectors on the function assigned to that VM. You
just need to assign the IO queue resources, and reserve the MSI-X
resources for another function.
The Linux nvme driver allows a mix of poll + interrupt queues, so the
user may want to allocate more IO queues than interrupts.
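Loosely modeled on how the upstream driver splits its queue sets, a
sketch of why that works (the helper name is illustrative; the real
driver goes through pci_alloc_irq_vectors_affinity() with affinity
sets):

#include <linux/pci.h>

static int nvme_setup_irqs(struct pci_dev *pdev,
                           unsigned int nr_io_queues,
                           unsigned int nr_poll_queues)
{
        unsigned int nr_vecs;

        /* Poll queues never take an interrupt. */
        if (nr_poll_queues >= nr_io_queues)
                nr_vecs = 1; /* everything polled; admin queue only */
        else
                nr_vecs = nr_io_queues - nr_poll_queues + 1;

        return pci_alloc_irq_vectors(pdev, 1, nr_vecs,
                                     PCI_IRQ_ALL_TYPES | PCI_IRQ_AFFINITY);
}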
A kernel interface for assigning interrupt vectors gets us only halfway
to configuring the assignable resources.
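From userspace, the half we'd have with this series looks something
like the below; the queue side would need a companion knob. Note that
sriov_vf_queue_count is hypothetical, and whether the proposed
sriov_vf_msix_count accepts 0 (the polling-only case above) is exactly
the kind of question a vector-only interface leaves open:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
        int fd;

        /* 0000:01:00.1 is a placeholder VF address. */
        fd = open("/sys/bus/pci/devices/0000:01:00.1/sriov_vf_msix_count",
                  O_WRONLY);
        if (fd >= 0) {
                dprintf(fd, "0\n"); /* no vectors: polling-only use */
                close(fd);
        }

        /* Hypothetical companion attribute, not in the posted series. */
        fd = open("/sys/bus/pci/devices/0000:01:00.1/sriov_vf_queue_count",
                  O_WRONLY);
        if (fd >= 0) {
                dprintf(fd, "8\n"); /* still assign 8 IO queues */
                close(fd);
        }
        return 0;
}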
> I expect they can probably work the same where the # of interrupts is
> an upper bound on the # of CPUs that can get queues and the device,
> once instantiated, could be configured for the number of queues to
> actually operate, if it wants.
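Something like that could be as simple as this at probe time (sketch
only, names illustrative):

#include <linux/cpumask.h>
#include <linux/minmax.h>
#include <linux/pci.h>

static unsigned int nvme_usable_queues(struct pci_dev *pdev)
{
        int nr_vecs = pci_msix_vec_count(pdev);

        if (nr_vecs <= 0)
                return 0;
        /* No benefit to more interrupt-driven queues than CPUs. */
        return min_t(unsigned int, nr_vecs, num_possible_cpus());
}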