Message-ID: <alpine.LFD.2.01.0906161623240.16802@localhost.localdomain>
Date: Tue, 16 Jun 2009 16:32:18 -0700 (PDT)
From: Linus Torvalds <torvalds@...ux-foundation.org>
To: Andrew Patterson <andrew.patterson@...com>
cc: linux-pci@...r.kernel.org,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
jbarnes@...tuousgeek.org,
Ivan Kokshaysky <ink@...assic.park.msu.ru>
Subject: Re: [PATCH 0/1] Recurse when searching for empty slots in resources
trees

On Tue, 16 Jun 2009, Linus Torvalds wrote:
>
> So your patch may fix a bug, but I'm pretty sure I've seen a patch from
> Ivan that should _also_ fix it, and that I would expect to do it not by
> just tweaking a fundamentally ambiguous case.

Hmm. For the life of me, I can't seem to find this patch. Maybe it wasn't
Ivan who wrote it after all. Or maybe my google-fu is weak. Or maybe I'm
just delusional, and the patch never existed.

However, regardless of that, I'm now confused about your patch too. So you
have this layout:

-+-[0000:c2]---00.0-[0000:c3-fb]--+-00.0 QLogic Corp. 8Gb Fibre Channel HBA
 |                                \-00.1 QLogic Corp. 8Gb Fibre Channel HBA

where bus c3 is inside bus c2. Fine. And we clearly get that wrong in the
resource tree:

f0000000-fdffffff : PCI Bus 0000:c3
  f0000000-fdffffff : PCI Bus 0000:c2
    f0000000-f00fffff : 0000:c3:00.1

since that one ends up having the c3 bus resource _outside_ of the c2 one.
That is, I think, the real bug. However, your patch doesn't try to fix
that bad nesting, but instead seems to try to work around it in some odd
way.
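
(To make the ambiguity concrete, here is a toy user-space sketch; it is
not kernel code, and every name in it is made up. It only checks range
containment, the way a naive lookup would:)

#include <stdio.h>

/* Toy stand-in for a resource range; purely illustrative, not the
 * real <linux/ioport.h> structures. */
struct range {
	unsigned long start, end;
	const char *name;
};

/* Check whether 'inner' fits entirely inside 'outer'. */
static int contains(const struct range *outer, const struct range *inner)
{
	return outer->start <= inner->start && inner->end <= outer->end;
}

int main(void)
{
	struct range c3 = { 0xf0000000, 0xfdffffff, "PCI Bus 0000:c3" };
	struct range c2 = { 0xf0000000, 0xfdffffff, "PCI Bus 0000:c2" };

	/* Both answers are "yes" because the two windows are identical,
	 * so containment alone cannot decide which bus resource should
	 * be the parent; that has to come from the PCI topology
	 * (bus c3 lives behind the bridge on c2). */
	printf("c2 fits inside c3: %d\n", contains(&c3, &c2));
	printf("c3 fits inside c2: %d\n", contains(&c2, &c3));
	return 0;
}

Both checks print 1, which is the fundamentally ambiguous case: the
ranges by themselves cannot tell you which window is the parent, only
the bridge topology can.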

But looking at things, I don't even see how this happens in the first
place. Afaik, we use pci_assign_resource() to assign bus resources, and
that one _should_ nest properly. So now I'm really confused about how you
got that /proc/iomem in the first place.
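
(Again purely as an illustration, not the kernel's ioport code and with
invented names: the nesting is supposed to come about because whoever
allocates a window hands the parent to the insertion, so with the right
parent the tree nests by construction. A user-space toy of that:)

#include <stdio.h>

/* Toy resource node; only the tree links matter here. */
struct res {
	unsigned long start, end;
	const char *name;
	struct res *child, *sibling;
};

/* Link 'r' as a child of 'parent' (no overlap checking in this toy). */
static void insert_under(struct res *parent, struct res *r)
{
	r->sibling = parent->child;
	parent->child = r;
}

/* Dump the tree in /proc/iomem style, children indented under parents. */
static void dump(const struct res *r, int depth)
{
	for (; r; r = r->sibling) {
		printf("%*s%08lx-%08lx : %s\n", depth * 2, "",
		       r->start, r->end, r->name);
		dump(r->child, depth + 1);
	}
}

int main(void)
{
	struct res root = { 0x00000000, 0xffffffff, "iomem",           NULL, NULL };
	struct res c2   = { 0xf0000000, 0xfdffffff, "PCI Bus 0000:c2", NULL, NULL };
	struct res c3   = { 0xf0000000, 0xfdffffff, "PCI Bus 0000:c3", NULL, NULL };
	struct res dev  = { 0xf0000000, 0xf00fffff, "0000:c3:00.1",    NULL, NULL };

	/* The nesting we would expect: the window of bus c2 under the
	 * root, the c3 bus resource under c2, the device BAR under c3. */
	insert_under(&root, &c2);
	insert_under(&c2, &c3);
	insert_under(&c3, &dev);

	dump(&root, 0);
	return 0;
}

That prints c3 nested inside c2 and the BAR inside c3, i.e. the opposite
of the /proc/iomem above.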

Is this perhaps some hotplug-pci specific bug? How did that bus resource
for "PCI Bus 0000:c3" get allocated?

		Linus