Message-ID: <alpine.LNX.2.00.1510071546080.23840@localhost.lm.intel.com>
Date: Wed, 7 Oct 2015 16:04:07 +0000 (UTC)
From: Keith Busch <keith.busch@...el.com>
To: Keith Busch <keith.busch@...el.com>
cc: Bjorn Helgaas <helgaas@...nel.org>,
LKML <linux-kernel@...r.kernel.org>, x86@...nel.org,
linux-pci@...r.kernel.org, Jiang Liu <jiang.liu@...ux.intel.com>,
Thomas Gleixner <tglx@...utronix.de>,
Dan Williams <dan.j.williams@...el.com>,
Bjorn Helgaas <bhelgaas@...gle.com>,
Bryan Veal <bryan.e.veal@...el.com>,
Ingo Molnar <mingo@...hat.com>,
"H. Peter Anvin" <hpa@...or.com>
Subject: Re: [RFC PATCHv2] x86/pci: Initial commit for new VMD device
driver
On Tue, 6 Oct 2015, Keith Busch wrote:
> On Tue, 6 Oct 2015, Bjorn Helgaas wrote:
>>> + resource_list_for_each_entry(entry, &resources) {
>>> + struct resource *source, *resource = entry->res;
>>> +
>>> + if (!i) {
>>> + resource->start = 0;
>>> + resource->end = (resource_size(
>>> + &vmd->dev->resource[0]) >> 20) - 1;
>>> + resource->flags = IORESOURCE_BUS |
>>> IORESOURCE_PCI_FIXED;
>>
>> I thought BAR0 was CFGBAR. I missed the connection to a bus number
>> aperture.
>
> Right, BAR0 is the CFGBAR and is the device's aperture to access its
> domain's config space.
It's a new day, so I'll try a new explanation of what this is about. The
size of the CFGBAR determines how many bus numbers can be reached through
the device's config space aperture. We are not setting the bus resource
to BAR0; we're just deriving the end of the bus resource from BAR0's
size. We expect the BAR to be 256M so that config space for 256 buses is
reachable:
  8 functions * 32 devices * 256 buses * 4k config space per function = 256M
If the BAR wasn't sized to 256M for any reason, we reduce the range of
bus numbers this domain can provide accordingly.
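To make the arithmetic concrete, here is a minimal stand-alone sketch
(the identifiers are illustrative, not taken from the patch) of deriving
the reachable bus range from the CFGBAR size: each bus needs
32 devices * 8 functions * 4K config space = 1M of aperture, so the bus
count is simply the BAR size shifted right by 20, and the last bus number
is one less than that, which is what the quoted resource->end assignment
computes.

  /*
   * Illustrative sketch only: derive how many bus numbers a VMD domain
   * can expose from the size of its CFGBAR (BAR0).  Each bus needs
   * 32 devices * 8 functions * 4K config space = 1M of aperture, so the
   * bus count is the BAR size divided by 1M (i.e. size >> 20).
   */
  #include <stdio.h>

  int main(void)
  {
          unsigned long long cfgbar_size = 256ULL << 20; /* e.g. a 256M CFGBAR */
          unsigned long buses = cfgbar_size >> 20;       /* 1M of ECAM per bus */

          /* Bus numbers start at 0, so the last reachable bus is buses - 1 */
          printf("CFGBAR of %lluM covers buses 0..%lu\n",
                 cfgbar_size >> 20, buses - 1);
          return 0;
  }

With a full 256M CFGBAR this prints "buses 0..255"; a smaller BAR simply
shrinks that range, which is the behavior described above.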