Message-ID: <56237061.1030006@ahsoftware.de>
Date:	Sun, 18 Oct 2015 12:11:45 +0200
From:	Alexander Holler <holler@...oftware.de>
To:	Greg Kroah-Hartman <gregkh@...uxfoundation.org>
Cc:	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Russell King <linux@....linux.org.uk>,
	Grant Likely <grant.likely@...aro.org>
Subject: Re: [PATCH 04/14] init: deps: order network interfaces by link order

Am 18.10.2015 um 07:59 schrieb Greg Kroah-Hartman:
> On Sun, Oct 18, 2015 at 07:20:34AM +0200, Alexander Holler wrote:
>> Am 18.10.2015 um 07:14 schrieb Greg Kroah-Hartman:
>>> On Sun, Oct 18, 2015 at 06:59:22AM +0200, Alexander Holler wrote:
>>>> Am 17.10.2015 um 21:36 schrieb Greg Kroah-Hartman:
>>>>
>>>>> Again, parallelizing does not solve anything, and causes more problems
>>>>> _and_ makes things take longer.  Try it, we have done it in the past and
>>>>> proven this, it's pretty easy to test :)
>>>>
>>>> Just because I'm curious, may I ask how I would test that in the easy way
>>>> you have in mind? I've just posted the results of my tests (the patch
>>>> series) but I wonder what you do have in mind.
>>>
>>> Use the tool, scripts/bootgraph.pl to create a boot graph of your boot
>>> sequence.  That should show you the drivers, or other areas, that are
>>> causing your boot to be "slow".
>>
>> So I've misunderstood you. I read your paragraph as saying that it's
>> easy to test parallelizing.
>
> Ah, ok, if you want to parallelize everything, add some logic in the
> driver core where the probe() callback is made to spin that off into a
> new thread for every call, and when it's done, clean up the thread.
> That's what I did many years ago to try this all out, if you dig in the
> lkml archives there's probably a patch somewhere that you can base the
> work off of to test it yourself.

Hmm, I don't think I will do that, because it would mean setting up a 
new thread for every call. And it doesn't take much imagination (or 
experience) to see that this introduces considerable overhead.

But maybe it makes sense to try out what I'm doing in my patches: 
starting multiple threads once and then just giving them some work. I 
will keep that in mind, although I don't think I will post any patch in 
the next few years. ;)

Regards,

Alexander Holler
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
