Message-ID: <1365618018.4235.35.camel@jlt4.sipsolutions.net>
Date: Wed, 10 Apr 2013 20:20:18 +0200
From: Johannes Berg <johannes@...solutions.net>
To: "Luis R. Rodriguez" <mcgrof@...not-panic.com>
Cc: "backports@...r.kernel.org" <backports@...r.kernel.org>,
Dan Williams <dcbw@...hat.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 06/18] compat: backport ASYNC_DOMAIN_EXCLUSIVE()
On Wed, 2013-04-10 at 10:26 -0700, Luis R. Rodriguez wrote:
> > I guess I'd have to review the async API,
>
> Yep, reviewing the noted commit would help too.
Yeah ... :)
> > What's the use of just this when you don't have things like
> > async_schedule_domain() and async_synchronize_full_domain()? The
> > regulator stuff wouldn't compile, I think?
>
> You mean: is not having the full async API that deals with all
> registered domains likely to be an issue for the users of
> async_synchronize_full_domain()? Let's better ask Dan.
I don't know. However, it seems that in order to have an ASYNC_DOMAIN()
or ASYNC_DOMAIN_EXCLUSIVE() you always need to *do* something with it,
so you'd also need at least the functions async_schedule_domain() and
async_synchronize_full_domain(), or similar, no?
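For reference, the relevant pieces of include/linux/async.h look
roughly like this (a sketch from memory; the exact struct async_domain
layout has changed between kernel versions, so don't take the details
literally):

#include <linux/list.h>
#include <linux/types.h>

typedef u64 async_cookie_t;
typedef void (*async_func_t)(void *data, async_cookie_t cookie);

struct async_domain {
	struct list_head pending;	/* fields vary by version */
	unsigned registered:1;	/* flushed by async_synchronize_full()? */
};

/* domains declared with this ARE waited on by async_synchronize_full() */
#define ASYNC_DOMAIN(_name) \
	struct async_domain _name = \
		{ .pending = LIST_HEAD_INIT(_name.pending), .registered = 1 }

/* "exclusive" domains are NOT; you have to wait on them explicitly */
#define ASYNC_DOMAIN_EXCLUSIVE(_name) \
	struct async_domain _name = \
		{ .pending = LIST_HEAD_INIT(_name.pending), .registered = 0 }

extern async_cookie_t async_schedule_domain(async_func_t func, void *data,
					    struct async_domain *domain);
extern void async_synchronize_full_domain(struct async_domain *domain);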
The point here seems to be to make boot faster: you start a bunch of
async probing inside a domain and then wait for the entire domain, so
everything in that domain can run in parallel.
Say, for example, you have 20 SCSI drives. If you probed them serially
you'd waste a lot of time waiting on each drive in turn. The idea
appears to be that you create a domain (using this macro), schedule all
the drive probes into it, and then wait for the whole domain to finish,
as in the sketch below.
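To make that concrete, a hypothetical use of the pattern (made-up
names; only the ASYNC_DOMAIN_EXCLUSIVE()/async_schedule_domain()/
async_synchronize_full_domain() calls are the real interface) would
be something like:

#include <linux/async.h>

/* hypothetical illustration, not from a real driver */
static ASYNC_DOMAIN_EXCLUSIVE(drive_probe_domain);

static void probe_one_drive(void *data, async_cookie_t cookie)
{
	/* slow per-drive work: spin-up, INQUIRY, partition scan, ... */
}

static void probe_all_drives(void **drives, int n)
{
	int i;

	/* kick off all the probes; they run concurrently ... */
	for (i = 0; i < n; i++)
		async_schedule_domain(probe_one_drive, drives[i],
				      &drive_probe_domain);

	/* ... then wait for this domain only, not for any unrelated
	 * async work elsewhere in the kernel */
	async_synchronize_full_domain(&drive_probe_domain);
}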
However, it seems entirely pointless to backport just a small part of
the API?
johannes