Message-ID: <20070509094647.GB13245@kroah.com>
Date: Wed, 9 May 2007 02:46:47 -0700
From: Greg KH <greg@...ah.com>
To: David Miller <davem@...emloft.net>
Cc: torvalds@...ux-foundation.org, bunk@...sta.de,
cornelia.huck@...ibm.com, linux-kernel@...r.kernel.org
Subject: Re: Please revert 5adc55da4a7758021bcc374904b0f8b076508a11
(PCI_MULTITHREAD_PROBE)
On Tue, May 08, 2007 at 02:15:54PM -0700, David Miller wrote:
> From: Linus Torvalds <torvalds@...ux-foundation.org>
> Date: Tue, 8 May 2007 08:27:34 -0700 (PDT)
>
> > Threading at the bus level just inevitably means things like device
> > numbers becoming random, depending on some timing/scheduling issue.
> > That's nasty.
>
> I hadn't considered this issue, so ignore the other reply I made to
> this thread.  Although, as an aside, I'm starting to come around to
> the opinion that device numbering doesn't matter.  Every device should
> have a unique ID of sorts, or a unique physical location, and that
> should factor into the name users use to refer to it.
We pretty much already do this today.

For block devices, as an example, look at the /dev/disk/ tree of
symlinks that udev creates so that you can handle block devices being
discovered in any possible order.
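
The exact set of symlinks depends on the distro's udev rules, but a
typical layout looks something like:

  /dev/disk/by-id/     names built from the drive model and serial number
  /dev/disk/by-uuid/   filesystem UUID
  /dev/disk/by-label/  filesystem label
  /dev/disk/by-path/   physical location on the bus

so the same disk keeps the same name no matter which /dev/sdX it
happens to come up as on a given boot.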
> Anyways, it would still be nice to really deal with the case where
> the IDE layer is waiting for a probe of port X to time out while we
> could, meanwhile, be initializing the networking card.
>
> Other bad cases are, as you mentioned, SCSI bus resets and SAS/FC
> fabric scans.  Those take several seconds, if not longer, and it's
> really stupid not to be able to do other things during that time.
Yes, because of that, I think this kind of multi-probe stuff should be
done in the IDE/SATA/SCSI bus code, not in the PCI code, as PCI
"normally" does not have any speed issues.
thanks,
greg k-h