Message-ID: <128e7cfa7d551607e195c4e087572594@codeaurora.org>
Date: Tue, 02 May 2017 10:18:25 +0800
From: weili@...eaurora.org
To: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
Cc: linux-kernel@...r.kernel.org, vatsa@...eaurora.org,
sonic@...eaurora.org
Subject: Re: [PATCH] driver-core: remove lock for platform devices during
probe
Hi Greg K-H,
On 2017-04-25 19:36, Greg Kroah-Hartman wrote:
> On Tue, Apr 25, 2017 at 04:43:33PM +0800, weili@...eaurora.org wrote:
>> Hi Greg K-H,
>>
>> On 2017-04-24 16:46, Greg Kroah-Hartman wrote:
>>
>> > And does it really reduce boot time? What are the numbers?
Yes, it really reduces boot time. After making the most time-consuming
platform driver use async probe and also applying this patch, we see that
driver run in parallel with the others, saving 140ms.
>
> And why wasn't that information in the initial commit message?
>
> And how much of a % is 140ms? Why is a single driver taking that long
> to initialize itself?
The kernel took 1.72 seconds to boot to the first init program, so 140ms
is an 8% improvement.
140ms is indeed long for a single driver to initialize; we are in
discussion with the driver owner about optimizing it.
>> > What does the boot graph look like when you run with and without this
>> > patch?
>> Without the patch, the boot graph is like this:
>>   CPU0: platform driver1 probe -> lock parent -> do probe stuff ->
>>         unlock parent -> probe finish
>>   CPU1: platform driver2 probe -> wait for lock on parent ->
>>         lock parent -> do probe stuff -> unlock parent -> probe finish
>>
>> With the patch, the boot graph is like this:
>>   CPU0: platform driver1 probe -> do probe stuff -> probe finish
>>   CPU1: platform driver2 probe -> do probe stuff -> probe finish
>
> No, I mean the boot graph in pretty .svg format that the kernel can
> output, with times and processes and everything. Look in the tools
> directory for more information, it will give you the exact timing for
> your change before and after and show you exactly where you are taking
> long periods of time.
>
> You did use that, or something else to measure this somehow, right?
>
The boot graph is in the attachment. The function msm_sharedmem_init took
a long time because it was blocked by another asynchronously probing
driver. After applying the patch, msm_sharedmem_init is no longer blocked.
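
For context, the blocking comes from the parent lock that the driver core
takes around every probe. Roughly, the path in drivers/base/dd.c looks like
the sketch below (paraphrased from memory for illustration, not a verbatim
copy of the file):

/*
 * Simplified sketch of __driver_attach() in drivers/base/dd.c
 * (paraphrased, not verbatim).  The "Needed for USB" parent lock is
 * what serializes sibling devices that share the same parent.
 */
#include <linux/device.h>

static int __driver_attach_sketch(struct device *dev, void *data)
{
        struct device_driver *drv = data;

        if (dev->parent)                        /* Needed for USB */
                device_lock(dev->parent);
        device_lock(dev);
        if (!dev->driver)
                driver_probe_device(drv, dev);  /* the actual probe work */
        device_unlock(dev);
        if (dev->parent)
                device_unlock(dev->parent);

        return 0;
}

With two platform devices under the same parent, the second probe sleeps in
device_lock(parent) until the first one finishes, which is exactly what
shows up as msm_sharedmem_init waiting in the graph.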
>> > Why is the platform bus so "special" to warrant this? Should we perhaps
>> > make this
>> > an option for any bus to enable/disable?
>> The lock on the parent was first introduced by the USB developers in
>> the following commit:
>> https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git/commit/drivers/base/dd.c?id=bf74ad5bc41727d5f2f1c6bedb2c1fac394de731
>> It may be useful for real bus devices such as USB, where the overhead
>> of acquiring the lock is considered small. But since the platform bus
>> is virtual, the lock is not necessary there. Removing it for platform
>> devices lets drivers probe in parallel and benefits boot time.
>
> I know all about USB here :)
>
> You did not answer my questions :(
>
Do you suggest that we add some variable like "async_probe" in struct
bus_type and then check that variable during probe to decide whether to
lock the parent?
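
If so, something along the lines of the sketch below is what I have in
mind (purely illustrative: the field name "need_parent_lock" is invented
here and is not an existing member of struct bus_type):

/*
 * Hypothetical sketch only: "need_parent_lock" is an invented bus_type
 * field used for illustration, not an existing kernel API.  Real buses
 * such as USB would set it at bus registration time; platform_bus_type
 * would leave it clear.
 */
#include <linux/device.h>

static bool probe_needs_parent_lock(struct device *dev)
{
        return dev->parent && dev->bus && dev->bus->need_parent_lock;
}

The probe path would then call device_lock(dev->parent) only when
probe_needs_parent_lock() returns true, so sibling platform devices would
no longer serialize on their parent.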
Best Regards
Wei
Download attachment "boot_before_patch.svg" of type "image/svg+xml" (35036 bytes)
Download attachment "boot_after_patch.svg" of type "image/svg+xml" (36882 bytes)