Date:	Fri, 20 Feb 2009 02:33:57 +0100
From:	Kay Sievers <kay.sievers@...y.org>
To:	Rusty Russell <rusty@...tcorp.com.au>
Cc:	Andreas Robinson <andr345@...il.com>, sam@...nborg.org,
	linux-kernel@...r.kernel.org,
	Jon Masters <jonathan@...masters.org>,
	heiko.carstens@...ibm.com
Subject: Re: [RFC PATCH 0/6] module, kbuild: Faster boot with custom kernel.

On Fri, Feb 20, 2009 at 01:58, Rusty Russell <rusty@...tcorp.com.au> wrote:
> On Friday 20 February 2009 08:29:48 Kay Sievers wrote:
>> Further testing revealed that if I only comment out the stop_machine()
>> preparation, which is only used in an error case, I get almost the
>> same improvement, even with the original mutex in place. Without the
>> mutex it's still a bit better, and it might be much better if we had
>> more CPUs, but all the long delays disappear just from removing the
>> stop_machine() preparation.
>
> Hmm, interesting.  The reason that reducing the lock coverage had this effect
> is that stop_machine_create() just bumps a refcount if someone is already
> between ...create() and ...destroy().
>
> So, now we've found the problem, let's fix it, then re-visit mutex reduction.
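For reference, a simplified sketch of the refcounting pattern described
above (not the actual stop_machine.c code; do_expensive_setup() and
do_expensive_teardown() are placeholder names for creating and tearing
down the per-CPU stop_machine threads/workqueue). Only the first of a
set of overlapping users pays the setup cost, so fully serialized
module loads pay it on every single load:

/* sketch only, not the real kernel code */
static DEFINE_MUTEX(setup_lock);
static int refcount;

int stop_machine_create(void)
{
	int ret = 0;

	mutex_lock(&setup_lock);
	if (!refcount)
		ret = do_expensive_setup();	/* threads, workqueue, ... */
	if (!ret)
		refcount++;
	mutex_unlock(&setup_lock);
	return ret;
}

void stop_machine_destroy(void)
{
	mutex_lock(&setup_lock);
	if (!--refcount)
		do_expensive_teardown();
	mutex_unlock(&setup_lock);
}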
>
> module: don't use stop_machine on module load
>
> Kay Sievers <kay.sievers@...y.org> discovered that boot times are slowed
> by about half a second because of all the stop_machine_create() calls,
> and he only probes about 40 modules (I have 125 loaded on this laptop).
>
> We only do stop_machine_create() so we can unlink the module if
> something goes wrong, but it's overkill (and buggy anyway: if
> stop_machine_create() fails we still call stop_machine_destroy()).
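For context, a rough sketch of what that means for the failure path in
load_module(); this is simplified from memory rather than the actual
diff, and the synchronize_sched() step is an assumption about how
lockless walkers of the module list (e.g. kallsyms) would be kept safe
without stopping the machine:

/* sketch only, not the real patch */

/* before: stop every CPU just to take a half-initialized module
 * off the list when loading fails */
	stop_machine(__unlink_module, mod, NULL);

/* after (assumed shape): plain unlink plus a grace period for any
 * lockless readers of the module list, no stop_machine threads needed */
	list_del_rcu(&mod->list);
	synchronize_sched();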

Sounds good. With that, no module takes more than 40 millisecs to link
now; most of them take between 3 and 8 millisecs.

Coldplug loads 39 modules; I end up with 50 loaded, but the rest come
in after the udev coldplug settle time. The 39 modules get linked into
the kernel in 281 millisecs, which sounds pretty good.

That looks very different from the numbers without this patch in an
otherwise identical setup, where we get heavy noise in the traces and
many delays of up to 200 millisecs before linking, with most modules
taking 30+ millisecs.

Thanks,
Kay
