Message-ID: <20160119220538.GA23731@intel.com>
Date: Tue, 19 Jan 2016 14:05:38 -0800
From: "Veal, Bryan E." <bryan.e.veal@...el.com>
To: Keith Busch <keith.busch@...el.com>
Cc: Christoph Hellwig <hch@...radead.org>,
Bjorn Helgaas <helgaas@...nel.org>,
"Derrick, Jonathan" <jonathan.derrick@...el.com>,
LKML <linux-kernel@...r.kernel.org>,
"x86@...nel.org" <x86@...nel.org>,
"linux-pci@...r.kernel.org" <linux-pci@...r.kernel.org>,
Thomas Gleixner <tglx@...utronix.de>,
Bjorn Helgaas <bhelgaas@...gle.com>,
"Williams, Dan J" <dan.j.williams@...el.com>
Subject: Re: [PATCHv8 0/5] Driver for new "VMD" device
On Tue, Jan 19, 2016 at 04:36:36PM +0000, Keith Busch wrote:
> On Tue, Jan 19, 2016 at 08:02:20AM -0800, Christoph Hellwig wrote:
> > As this seems to require special drivers to bind to it, and Intel
> > people refuse to even publicly tell what the code does I'd like
> > to NAK this code until we get an explanation and use cases for it.
>
> We haven't opened the h/w specification, but we've been pretty open with
> what it provides, how the code works, and our intended use case. The
> device provides additional pci domains for people who need more than
> the 256 busses a single domain provides.
>
> What information may I provide to satisfy your use case concerns? Are
> you wanting to know what devices we have in mind that require additional
> domains?
VMD is simply a convenient way to create a new PCIe host bridge that
happens to sit on the existing PCIe root bus. It changes how I/O is
routed (i.e. BDF translation), but not its contents. We've actually gone
through some effort in the code to *avoid* special drivers by
implementing the existing host bridge abstractions. The cases where
existing drivers wouldn't work are due to limitations, not arbitrary
filters. (For example, VMD doesn't know how to route legacy I/O ports
or INTx.)
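
To make the "existing host bridge abstractions" point concrete, here is
a rough sketch of the shape of the thing. This is illustrative only, not
the driver itself: the my_vmd_* names and the single ECAM-style "cfgbar"
window are assumptions made up for the example. The driver supplies
pci_ops that translate a child's bus/devfn/offset into its own MMIO
window, and the PCI core does the rest:

/*
 * Illustrative sketch only -- not the VMD driver.  Assumes a device
 * that exposes its children's config space through one ECAM-style
 * MMIO window ("cfgbar"); all my_vmd_* names are made up.
 */
#include <linux/io.h>
#include <linux/pci.h>

struct my_vmd {
	void __iomem *cfgbar;		/* child config space window */
};

/* BDF translation: map bus/devfn/offset into the MMIO window. */
static void __iomem *my_vmd_cfg_addr(struct pci_bus *bus,
				     unsigned int devfn, int where)
{
	struct my_vmd *vmd = bus->sysdata;

	return vmd->cfgbar + ((bus->number << 20) | (devfn << 12)) + where;
}

static int my_vmd_cfg_read(struct pci_bus *bus, unsigned int devfn,
			   int where, int size, u32 *val)
{
	void __iomem *addr = my_vmd_cfg_addr(bus, devfn, where);

	switch (size) {
	case 1:
		*val = readb(addr);
		break;
	case 2:
		*val = readw(addr);
		break;
	default:
		*val = readl(addr);
		break;
	}
	return PCIBIOS_SUCCESSFUL;
}

static int my_vmd_cfg_write(struct pci_bus *bus, unsigned int devfn,
			    int where, int size, u32 val)
{
	void __iomem *addr = my_vmd_cfg_addr(bus, devfn, where);

	switch (size) {
	case 1:
		writeb(val, addr);
		break;
	case 2:
		writew(val, addr);
		break;
	default:
		writel(val, addr);
		break;
	}
	return PCIBIOS_SUCCESSFUL;
}

static struct pci_ops my_vmd_ops = {
	.read	= my_vmd_cfg_read,
	.write	= my_vmd_cfg_write,
};

/*
 * At probe time (details elided), the driver builds a 'resources'
 * list describing the windows the bridge forwards, then hands the
 * whole thing to the core:
 *
 *	bus = pci_scan_root_bus(&dev->dev, 0, &my_vmd_ops,
 *				vmd, &resources);
 *
 * From there, enumeration and driver binding proceed exactly as they
 * do behind any other host bridge.
 */

Nothing in that path filters anything: whatever devices turn up in the
new domain get stock drivers bound to them.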