Message-Id: <201107051721.02079.arnd@arndb.de>
Date: Tue, 5 Jul 2011 17:21:01 +0200
From: Arnd Bergmann <arnd@...db.de>
To: Greg KH <gregkh@...e.de>
Cc: Grant Likely <grant.likely@...retlab.ca>,
Mark Brown <broonie@...nsource.wolfsonmicro.com>,
Kay Sievers <kay.sievers@...y.org>,
linux-kernel@...r.kernel.org, "Rafael J. Wysocki" <rjw@...k.pl>,
"David S. Miller" <davem@...emloft.net>
Subject: Re: [PATCH] drivercore: Add driver probe deferral mechanism
On Tuesday 05 July 2011, Greg KH wrote:
> So the driver core is just going to sit and spin and continue to try
> to probe drivers for as long as it gets that error value returned?
> What is ever going to cause that loop to terminate? It seems a bit
> hacky to just keep looping over and over, hoping that at some point
> everything will settle down so that we can go to sleep again.
Well, it only needs to retry for as long as new devices are still
probing successfully. The order I think this should happen in is:
* go through all initcalls and record any devices that are not yet ready
* retry all devices on the list for as long as at least one of them
  succeeds (a sketch of this loop is below)
* when a new device gets matched from a module load, run that loop again
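Concretely, the second step could look roughly like this. This is only
an illustration, not code from the patch: the list and mutex names match
my snippet further down, but the deferred_list member on struct device
is made up here, and using device_attach() for the retry is just one
possible way to do it:

static void deferred_probe_retry(void)
{
	bool progress;

	do {
		LIST_HEAD(pending);
		struct device *dev, *tmp;

		progress = false;

		/* Take the current batch off the global list, so a
		 * probe that defers again can re-add itself without
		 * deadlocking on deferred_probe_mutex. */
		mutex_lock(&deferred_probe_mutex);
		list_splice_init(&deferred_probe_list, &pending);
		mutex_unlock(&deferred_probe_mutex);

		list_for_each_entry_safe(dev, tmp, &pending, deferred_list) {
			list_del_init(&dev->deferred_list);
			/* device_attach() returns 1 when a driver bound */
			if (device_attach(dev) > 0)
				progress = true;
		}
	} while (progress);
}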
If I read the patch correctly, the workqueue gets scheduled every time
a new device is added. That retries the deferred devices more often
than necessary, which can have a significant boot-time impact, and it
also introduces more asynchronicity that may expose new bugs.
Maybe we can have a late_initcall that enables the automatic retry
and probes everything once:
static bool deferred_probe;

static int __init deferred_probe_start(void)
{
	/* From now on, successful probes may trigger retries. */
	deferred_probe = true;

	/* Run one synchronous retry pass over everything that was
	 * deferred during the initcall phase. */
	mutex_lock(&deferred_probe_mutex);
	if (!list_empty(&deferred_probe_list))
		schedule_work(&deferred_probe_work);
	mutex_unlock(&deferred_probe_mutex);
	flush_work_sync(&deferred_probe_work);

	return 0;
}
late_initcall(deferred_probe_start);
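The probe side would then only need to kick that work once the flag is
set. Again just a sketch with made-up names: assume really_probe() calls
this helper whenever a driver returns the "not ready yet" error code
that the patch introduces, and that dev->deferred_list gets initialized
in device_initialize():

static void deferred_probe_add(struct device *dev)
{
	mutex_lock(&deferred_probe_mutex);
	/* Queue the device unless a retry is already pending for it. */
	if (list_empty(&dev->deferred_list))
		list_add_tail(&dev->deferred_list, &deferred_probe_list);
	/* Before late_initcall time we only collect devices; after
	 * that, kick the retry so module loads get picked up too. */
	if (deferred_probe)
		schedule_work(&deferred_probe_work);
	mutex_unlock(&deferred_probe_mutex);
}

That way nothing gets retried during the initcall phase itself, and the
flush in deferred_probe_start() guarantees one deterministic pass over
everything that was deferred during boot.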
Arnd