Message-ID: <20170502183728.GA7792@kroah.com>
Date: Tue, 2 May 2017 11:37:28 -0700
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: weili@...eaurora.org
Cc: linux-kernel@...r.kernel.org, vatsa@...eaurora.org,
sonic@...eaurora.org
Subject: Re: [PATCH] driver-core: remove lock for platform devices during
probe
On Tue, May 02, 2017 at 10:18:25AM +0800, weili@...eaurora.org wrote:
> Hi Greg K-H,
>
> On 2017-04-25 19:36, Greg Kroah-Hartman wrote:
> > On Tue, Apr 25, 2017 at 04:43:33PM +0800, weili@...eaurora.org wrote:
> > > Hi Greg K-H,
> > >
> > > On 2017-04-24 16:46, Greg Kroah-Hartman wrote:
> > >
> > > > And does it really reduce boot time? What are the numbers?
> > > Yes, it really reduces boot time. After making the most time-consuming
> > > platform driver use async probe and also applying this patch, we see the
> > > driver run in parallel with others, saving 140ms.
> >
> > And why wasn't that information in the initial commit message?
> >
> > And how much of a % is 140ms? Why is a single driver taking that long
> > to initialize itself?
> The kernel took 1.72 seconds to boot to the point of running the first init
> program, so 140ms is an 8% improvement.
> 140ms is indeed long for a single driver to initialize. We are in discussion
> with the driver owner about optimization.
Yes, please fix that.
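
For reference, the per-driver async-probe opt-in mentioned above is the
probe_type field in struct device_driver. Here is a minimal sketch of a
platform driver requesting asynchronous probe; the driver and function
names are made up for illustration:

#include <linux/module.h>
#include <linux/platform_device.h>

/* Placeholder probe: stands in for whatever slow initialization the
 * real driver performs. */
static int example_probe(struct platform_device *pdev)
{
	return 0;
}

static struct platform_driver example_driver = {
	.probe = example_probe,
	.driver = {
		.name = "example-slow-device",
		/* Ask the driver core to run this probe on an async
		 * thread so the rest of boot does not wait for it. */
		.probe_type = PROBE_PREFER_ASYNCHRONOUS,
	},
};
module_platform_driver(example_driver);

MODULE_LICENSE("GPL");

Note that async probe alone does not help if the probe then blocks on the
parent lock held by another child of the same parent, which is the
serialization this patch is about.
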
> > > > What does the boot graph look like when you run with and without this
> > > > patch?
> > > Without the patch, the boot graph is like this:
> > > CPU0: platform driver1 probe -> lock parent -> do probe stuff ->
> > >       unlock parent -> probe finish
> > > CPU1: platform driver2 probe -> wait for lock on parent -> lock parent ->
> > >       do probe stuff -> unlock parent -> probe finish
> > >
> > > With the patch, the boot graph is like this:
> > > CPU0: platform driver1 probe -> do probe stuff -> probe finish
> > > CPU1: platform driver2 probe -> do probe stuff -> probe finish
> >
> > No, I mean the boot graph in pretty .svg format that the kernel can
> > output, with times and processes and everything. Look in the tools
> > directory for more information; it will give you the exact timing for
> > your change before and after and show you exactly where you are taking
> > long periods of time.
> >
> > You did use that, or something else to measure this somehow, right?
> >
> The boot graph is in the attachment. The function msm_sharedmem_init took a
> long time because it was blocked by another async-probed driver. After
> applying the patch, msm_sharedmem_init is no longer blocked.
Why isn't the boot graph showing any parallel tasks? I thought it
would.
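
For context, the serialization the quoted trace above describes comes from
the parent lock the driver core takes around probe (the commit referenced
further down in this thread). A simplified sketch of the pattern,
approximating what drivers/base/dd.c does rather than quoting it exactly:

#include <linux/device.h>

/*
 * Simplified sketch, not the exact kernel code: the parent's device lock
 * is held for the whole probe, so two children of the same parent cannot
 * probe concurrently even if one of them asked for async probe.
 */
static int sketch_attach_one(struct device_driver *drv, struct device *dev)
{
	if (dev->parent)		/* needed for real buses such as USB */
		device_lock(dev->parent);
	device_lock(dev);

	if (!dev->driver && drv->probe)
		drv->probe(dev);	/* stand-in for driver_probe_device() */

	device_unlock(dev);
	if (dev->parent)
		device_unlock(dev->parent);
	return 0;
}

Skipping the two parent-lock calls for platform devices is what lets
driver1 and driver2 in the quoted trace run at the same time.
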
> > > > Why is the platform bus so "special" to warrant this? Should we perhaps
> > > > make this an option for any bus to enable/disable?
> > > The lock on the parent was first introduced by the USB developers in the
> > > following commit:
> > > https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git/commit/drivers/base/dd.c?id=bf74ad5bc41727d5f2f1c6bedb2c1fac394de731
> > > This is useful for real bus devices such as USB, and there the overhead
> > > of acquiring the lock is not large.
> > > But since the platform bus is virtual, the lock is not necessary.
> > > Removing it for platform devices lets drivers probe in parallel and
> > > improves boot time.
> >
> > I know all about USB here :)
> >
> > You did not answer my questions :(
> >
> Do you suggest that we add some variable like "async_probe" to struct
> bus_type and then check that variable during probe to decide whether to
> lock the parent?
You don't want to do this for all platform devices; things will break. We
found this out a long time ago when we tried to make everything init in
parallel. So you are going to have to do a lot of testing on lots of
platforms...
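
If the per-bus opt-in from the question above is the route taken, one
hypothetical shape (the struct, field, and helper names below are invented
for illustration and do not exist in the kernel) is a flag in struct
bus_type that the probe path checks before taking the parent lock; buses
that really need the serialization, such as USB, would set it, and the
virtual platform bus would leave it clear:

#include <stdbool.h>

/* Hypothetical stand-ins for struct bus_type and struct device, reduced to
 * the members this sketch needs. */
struct bus_type_sketch {
	const char *name;
	bool need_parent_lock;
};

struct device_sketch {
	struct device_sketch *parent;
	const struct bus_type_sketch *bus;
};

const struct bus_type_sketch usb_bus_sketch = {
	.name = "usb",
	.need_parent_lock = true,	/* real bus: keep the serialization */
};

const struct bus_type_sketch platform_bus_sketch = {
	.name = "platform",
	.need_parent_lock = false,	/* virtual bus: allow parallel probes */
};

/* Would replace the unconditional "if (dev->parent)" checks in the probe
 * and remove paths of drivers/base/dd.c. */
bool sketch_should_lock_parent(const struct device_sketch *dev)
{
	return dev->parent && dev->bus && dev->bus->need_parent_lock;
}

Whatever form it takes, the concern above still applies: anything that
quietly relied on probes of sibling devices being serialized would break,
so it needs testing on lots of platforms.
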
thanks,
greg k-h