Message-ID: <20151018055926.GB31909@kroah.com>
Date: Sat, 17 Oct 2015 22:59:26 -0700
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: Alexander Holler <holler@...oftware.de>
Cc: Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Russell King <linux@....linux.org.uk>,
Grant Likely <grant.likely@...aro.org>
Subject: Re: [PATCH 04/14] init: deps: order network interfaces by link order
On Sun, Oct 18, 2015 at 07:20:34AM +0200, Alexander Holler wrote:
> Am 18.10.2015 um 07:14 schrieb Greg Kroah-Hartman:
> >On Sun, Oct 18, 2015 at 06:59:22AM +0200, Alexander Holler wrote:
> >>Am 17.10.2015 um 21:36 schrieb Greg Kroah-Hartman:
> >>
> >>>Again, parallelizing does not solve anything, and causes more problems
> >>>_and_ makes things take longer. Try it, we have done it in the past and
> >>>proven this, it's pretty easy to test :)
> >>
> >>Just because I'm curious, may I ask how I would test that in the easy way
> >>you have in mind? I've just posted the results of my tests (the patch
> >>series) but I wonder what you do have in mind.
> >
> >Use the tool, scripts/bootgraph.pl to create a boot graph of your boot
> >sequence. That should show you the drivers, or other areas, that are
> >causing your boot to be "slow".
>
> So I've misunderstood you. I read your paragraph as saying that it's easy to
> test parallelizing.
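[For reference, the bootgraph.pl workflow described above looks roughly like
this; it assumes a kernel source tree is at hand, and the command-line options
come from the usage comments in the script itself:]

```shell
# 1. Boot the kernel with initcall timing enabled on the command line:
#      initcall_debug printk.time=1 loglevel=8
# 2. Capture the boot log and render it as an SVG timeline:
dmesg > boot.log
perl scripts/bootgraph.pl < boot.log > boot.svg
# 3. Open boot.svg in a browser; the widest bars are the slow initcalls.
```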
Ah, ok, if you want to parallelize everything, add some logic in the
driver core where the probe() callback is made, to spin that off into a
new thread for every call, and clean up the thread when it's done.
That's what I did many years ago to try this all out; if you dig in the
lkml archives, there's probably a patch somewhere that you can base your
work on to test it yourself.
hope this helps,
greg k-h
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/