Message-ID: <alpine.LNX.2.00.1510062331220.23840@localhost.lm.intel.com>
Date:	Wed, 7 Oct 2015 00:21:02 +0000 (UTC)
From:	Keith Busch <keith.busch@...el.com>
To:	Bjorn Helgaas <helgaas@...nel.org>
cc:	Keith Busch <keith.busch@...el.com>,
	LKML <linux-kernel@...r.kernel.org>, x86@...nel.org,
	linux-pci@...r.kernel.org, Jiang Liu <jiang.liu@...ux.intel.com>,
	Thomas Gleixner <tglx@...utronix.de>,
	Dan Williams <dan.j.williams@...el.com>,
	Bjorn Helgaas <bhelgaas@...gle.com>,
	Bryan Veal <bryan.e.veal@...el.com>,
	Ingo Molnar <mingo@...hat.com>,
	"H. Peter Anvin" <hpa@...or.com>
Subject: Re: [RFC PATCHv2] x86/pci: Initial commit for new VMD device
 driver

Hi Bjorn,

Thanks for the feedback. Most of the issues you mentioned look pretty
straightforward to resolve, and I will fix them for the next revision.

I have some immediate follow-up comments on two issues you brought up:

On Tue, 6 Oct 2015, Bjorn Helgaas wrote:
>> +static int vmd_find_free_domain(void)
>> +{
>> +	int domain = -1;
>> +	struct pci_bus *bus = NULL;
>> +
>> +	while ((bus = pci_find_next_bus(bus)) != NULL)
>> +		domain = max_t(int, domain, pci_domain_nr(bus));
>> +	if (domain < 0)
>> +		return -ENODEV;
>> +	return domain + 1;
>> +}
>
> The PCI core should own domain number allocation.  We have a little
> bit of generic domain support, e.g., pci_get_new_domain_nr().  On x86,
> I think that is compiled-in, but currently not used -- currently x86
> only uses _SEG from ACPI.  How would you handle collisions between a
> domain number assigned here (or via pci_get_new_domain_nr()) and a
> hot-added host bridge where _SEG uses the same domain?

Thank you for bringing this up, as I hadn't thought much about it and may
have misunderstood the meaning of _SEG. AIUI, it is used to identify a
logical grouping. The OS does not need to use the same identifier for
the domain, so there's no need to collide if we assign the domain of
a new _SEG to the next available domain_nr.

On pci_get_new_domain_nr(), can we make it use something like an ida
instead of an atomic? I think we'd like to return domains to the
available pool when a device is unbound. I imagine someone will test
unbinding/rebinding these devices in a loop for a while, and will file
a bug after the atomic increments to 64k. :)
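For illustration, here's a userspace sketch (not kernel code, and the
kernel's ida interface looks different) of the reuse behavior I'm after:
freed IDs go back into the pool, so bind/unbind cycles don't exhaust the
ID space the way a bare atomic counter would. The allocator below is a
toy bitmap, just to show the semantics:

```c
#include <stddef.h>

/*
 * Toy bitmap ID allocator. Illustrates the reuse semantics we'd get
 * from an ida: id_free() returns an ID to the pool, so repeated
 * alloc/free cycles keep handing out the same small IDs instead of
 * counting up forever like atomic_inc_return() does.
 */
#define MAX_IDS 64
static unsigned char id_used[MAX_IDS];

/* Return the lowest free ID, or -1 if the pool is exhausted. */
static int id_alloc(void)
{
	for (int i = 0; i < MAX_IDS; i++) {
		if (!id_used[i]) {
			id_used[i] = 1;
			return i;
		}
	}
	return -1;
}

/* Put an ID back into the pool so it can be handed out again. */
static void id_free(int id)
{
	if (id >= 0 && id < MAX_IDS)
		id_used[id] = 0;
}
```

With this shape, a domain freed on unbind is reallocated on the next
bind, rather than marching toward the 16-bit segment limit.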

>> +	resource_list_for_each_entry(entry, &resources) {
>> +		struct resource *source, *resource = entry->res;
>> +
>> +		if (!i) {
>> +			resource->start = 0;
>> +			resource->end = (resource_size(
>> +					&vmd->dev->resource[0]) >> 20) - 1;
>> +			resource->flags = IORESOURCE_BUS | IORESOURCE_PCI_FIXED;
>
> I thought BAR0 was CFGBAR.  I missed the connection to a bus number
> aperture.

Right, BAR0 is the CFGBAR and is the device's aperture to access its
domain's config space. Based on your comment, I assume that's not
a bus resource, though it seems to work with the posted code. :)
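To spell out the arithmetic behind the posted snippet: with ECAM-style
config access, each bus occupies 256 devfns * 4 KiB = 1 MiB of the
CFGBAR, so the BAR size shifted right by 20 gives the number of buses
the domain can address (hence `resource->end = (size >> 20) - 1`). A
small sketch; `vmd_cfgbar_bus_count` is a hypothetical helper, not part
of the patch:

```c
#include <stdint.h>

/*
 * Number of PCI buses addressable through a CFGBAR of the given size,
 * assuming ECAM layout: 256 devfns per bus * 4 KiB of config space
 * per devfn = 1 MiB (1 << 20) per bus.
 */
static uint32_t vmd_cfgbar_bus_count(uint64_t cfgbar_size)
{
	return (uint32_t)(cfgbar_size >> 20);
}
```

So a 32 MiB CFGBAR covers buses 0-31, which is what the bus resource in
the patch describes.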
