Message-ID: <eb45485d9107440a667e598da99ad949320b77b1.camel@intel.com>
Date: Thu, 27 Aug 2020 16:45:53 +0000
From: "Derrick, Jonathan" <jonathan.derrick@...el.com>
To: "hch@...radead.org" <hch@...radead.org>
CC: "wangxiongfeng2@...wei.com" <wangxiongfeng2@...wei.com>,
"kw@...ux.com" <kw@...ux.com>,
"hkallweit1@...il.com" <hkallweit1@...il.com>,
"kai.heng.feng@...onical.com" <kai.heng.feng@...onical.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"mika.westerberg@...ux.intel.com" <mika.westerberg@...ux.intel.com>,
"Mario.Limonciello@...l.com" <Mario.Limonciello@...l.com>,
"linux-pci@...r.kernel.org" <linux-pci@...r.kernel.org>,
"Williams, Dan J" <dan.j.williams@...el.com>,
"bhelgaas@...gle.com" <bhelgaas@...gle.com>,
"Huffman, Amber" <amber.huffman@...el.com>,
"Wysocki, Rafael J" <rafael.j.wysocki@...el.com>
Subject: Re: [PATCH] PCI/ASPM: Enable ASPM for links under VMD domain
On Thu, 2020-08-27 at 17:23 +0100, hch@...radead.org wrote:
> On Thu, Aug 27, 2020 at 04:13:44PM +0000, Derrick, Jonathan wrote:
> > On Thu, 2020-08-27 at 06:34 +0000, hch@...radead.org wrote:
> > > On Wed, Aug 26, 2020 at 09:43:27PM +0000, Derrick, Jonathan wrote:
> > > > Feel free to review my set to disable the MSI remapping which will
> > > > make it perform as well as direct-attached:
> > > >
> > > > https://patchwork.kernel.org/project/linux-pci/list/?series=325681
> > >
> > > So that then we have to deal with your schemes to make individual
> > > device direct assignment work in a convoluted way?
> >
> > That's not the intent of that patchset -at all-. It was to address the
> > performance bottlenecks with VMD that you constantly complain about.
>
> I know. But once we fix that bottleneck we'll have to fix the next
> issue, and then the one after that, while at the same time VMD brings
> zero actual benefits.
>
There are a few benefits, and there are other users with unique use cases:
1. Passthrough of the endpoint to OSes which don't natively support
hotplug can enable hotplug for that OS using the guest VMD driver
2. Some hypervisors have a limit on the number of devices that can be
passed through. The VMD endpoint is a single device that expands to many.
3. Expansion of possible bus numbers beyond 256 by using other
segments.
4. Custom RAID LED patterns driven by ledctl (rough example below).
I'm not trying to market this. Just pointing out that this isn't
"bringing zero actual benefits" to many users.
> > > Please just give us
> > > a disable knob for VMD, which solves _all_ these problems without
> > > adding any.
> >
> > I don't see the purpose of this line of discussion. VMD has been in the
> > kernel for 5 years. We are constantly working on better support.
>
> Please just work with the platform people to allow the host to disable
> VMD. That is the only really useful value add here.
Cheers