Message-ID: <18563.50428.659101.868745@notabene.brown>
Date:	Mon, 21 Jul 2008 09:06:36 +1000
From:	Neil Brown <neilb@...e.de>
To:	Arjan van de Ven <arjan@...radead.org>
Cc:	linux-kernel@...r.kernel.org, mingo@...e.hu,
	Simon Arlott <simon@...e.lp0.eu>,
	Daniel Walker <dwalker@...sta.com>,
	Rene Herman <rene.herman@...access.nl>
Subject: Re: [patch 3/4] fastboot: make the raid autodetect code wait for all
 devices to init


(wondering why I wasn't Cc:ed on this...)

On Sunday July 20, arjan@...radead.org wrote:
> 
> From: Arjan van de Ven <arjan@...ux.intel.com>
> Date: Sun, 20 Jul 2008 13:07:09 -0700
> Subject: [PATCH] fastboot: make the raid autodetect code wait for all devices to init
> 
> The raid autodetect code really needs to have all devices probed before
> it can detect raid arrays; not doing so would give rather messy situations
> where arrays would get detected as degraded while they shouldn't be etc.
> 
> This is in preparation of removing the "wait for everything to init"
> code that makes everyone pay, not just raid users.
> 
> Signed-off-by: Arjan van de Ven <arjan@...ux.intel.com>
> ---
>  init/do_mounts_md.c |    7 +++++++
>  1 files changed, 7 insertions(+), 0 deletions(-)
> 
> diff --git a/init/do_mounts_md.c b/init/do_mounts_md.c
> index 693d246..c0412a9 100644
> --- a/init/do_mounts_md.c
> +++ b/init/do_mounts_md.c
> @@ -267,9 +267,16 @@ __setup("md=", md_setup);
>  void __init md_run_setup(void)
>  {
>  	create_dev("/dev/md0", MKDEV(MD_MAJOR, 0));
> +
>  	if (raid_noautodetect)
>  		printk(KERN_INFO "md: Skipping autodetection of RAID arrays. (raid=noautodetect)\n");
>  	else {
> +		/*
> +		 * Since we don't want to detect and use half a raid array, we
> +		 * need to wait for the known devices to complete their probing
> +		 */
> +		while (driver_probe_done() != 0)
> +			msleep(100);
>  		int fd = sys_open("/dev/md0", 0, 0);
>  		if (fd >= 0) {
>  			sys_ioctl(fd, RAID_AUTORUN, raid_autopart);

I must say that I think this is pretty horrible.   But then it is a
pretty horrible problem and I don't think there is a clean solution.

If md is built as a module, this code won't run, so there will be no
change.  If md is compiled in, this code will silently slow down boot
even if there are no raid arrays to assemble.  I think the "silently"
is a problem.  I'm not looking forward to "my computer boots slower if
I compile md into the kernel" reports on linux-raid@...r.

What would you think of

    if (driver_probe_done() != 0) {
        printk("md: Waiting for all devices to be available before autodetect\n"
               "md:  If you don't boot off raid, use raid=noautodetect\n");
        do
            msleep(100);
        while (driver_probe_done() != 0);
    }

??

Also, the "driver_probe_done() != 0" bothers me.

If driver_probe_done() is a boolean, it should be
	while (!driver_probe_done()) msleep ....

and if it returns a negative error, then it should be
	while (driver_probe_done() < 0) msleep ....

The != 0 confuses me about the expected return type.
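
(If it is the error-code flavour, I'd guess the drivers/base/dd.c side
looks something like the sketch below -- written from memory, with
"probe_count" standing in for whatever counter it actually keeps:

	/* sketch only, not the real source */
	int driver_probe_done(void)
	{
		if (atomic_read(&probe_count))	/* probes still in flight */
			return -EBUSY;
		return 0;			/* all known devices probed */
	}

in which case "while (driver_probe_done() < 0)" says what it means,
and the "!= 0" is just hiding the convention.)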


The "real" solution here involves assembling arrays in userspace using
"mdadm --incremental" from udevd, and using write-intent-bitmaps so
that writing to an array before all the component devices are
available can be done without requiring a full resync.  There is still
a bit more code needed to make that work really smoothly.
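
(Concretely, and only as a sketch from memory rather than tested rule
syntax: a udev rule along the lines of

	# run mdadm --incremental on each raid member as it appears;
	# assumes udev has already imported ID_FS_TYPE via blkid
	SUBSYSTEM=="block", ACTION=="add", ENV{ID_FS_TYPE}=="linux_raid_member", \
		RUN+="/sbin/mdadm --incremental $env{DEVNAME}"

together with a write-intent bitmap on the array -- e.g.
"mdadm --grow --bitmap=internal /dev/md0" -- so that writing to a
partially-assembled array costs a catch-up resync rather than a full
one.)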

NeilBrown
