Message-ID: <20140115125740.160e8998@notabene.brown>
Date:	Wed, 15 Jan 2014 12:57:40 +1100
From:	NeilBrown <neilb@...e.de>
To:	Nicolas Schichan <nschichan@...ebox.fr>
Cc:	LKML <linux-kernel@...r.kernel.org>, linux-raid@...r.kernel.org
Subject: Re: livelock during MD device open

On Tue, 14 Jan 2014 18:14:46 +0100 Nicolas Schichan <nschichan@...ebox.fr>
wrote:

> 
> 
> Hi,
> 
> I have recently been trying to find the cause of a livelock occurring
> during MD device open.
> 
> The livelock happens when a process tries to open an MD device for the
> first time and another opens the same MD device and sends an invalid
> ioctl:
> 
> Process 1				Process 2
> ---------				---------
> 
> md_alloc()
>    mddev_find()
>    -> returns a new mddev with
>       hold_active == UNTIL_IOCTL
>    add_disk()
>    -> sends KOBJ_ADD uevent
> 
> 					(sees KOBJ_ADD uevent for device)
> 					md_open()
> 					md_ioctl(INVALID_IOCTL)
> 					-> returns ENODEV and clears
> 					   mddev->hold_active
> 					md_release()
> 					  md_put()
> 					  -> deletes the mddev as
> 					     hold_active is 0
> 
> md_open()
>    mddev_find()
>    -> returns a newly
>      allocated mddev with
>      mddev->gendisk == NULL
> -> returns with ERESTARTSYS
>     (kernel restarts the open syscall)
> 
> 
> As to how to fix this, I see two possibilities:
> 
> - don't set hold_active to 0 if err is -ENODEV in the abort_unlock
>    path in md_ioctl().
> 
> - check cmd parameter early in md_ioctl() and return -ENOTTY if the
>    cmd parameter is not a valid MD ioctl.
> 
> Please advise on the preferred way to fix this; I'll be glad to send a
> patch for whichever solution you prefer.
> 
> I also have a simple C program that I can send should you want to
> reproduce the issue.
> 
> Regards,
> 

That's a very small race you are consistently losing - if I understand
correctly.

In __blkdev_get:

 restart:

	ret = -ENXIO;
	disk = get_gendisk(bdev->bd_dev, &partno);
	if (!disk)
		goto out;
	owner = disk->fops->owner;

	disk_block_events(disk);
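	/*
	 * Race window: get_gendisk() above has already run md_probe() ->
	 * md_alloc() -> add_disk() and emitted the KOBJ_ADD uevent, so a
	 * udev-spawned open plus invalid ioctl can slip in here, before
	 * bd_mutex is taken below.
	 */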
	mutex_lock_nested(&bdev->bd_mutex, for_part);


The "get_gendisk" calls into md_alloc (via md_probe) and then add_disk(),
which generates a uevent which is handled by udev.
And before the above code gets to the mutex_lock_nested(), the process run by
udev must have opened the device (executing all that code above and more) and
issued the ioctl.

I guess it is possible, but for it to happen every time and produce a
livelock, the scheduler must be encouraging that behaviour.  Presumably
this is a virtual machine with just one CPU?

I suppose the best fix is, as you suggest, to avoid clearing hold_active for
invalid ioctls.  It feels a bit like papering over a bug, but I think the
only way to really fix it is to add extra locking to the above code sequence,
and I don't want to do that.

Of your two suggestions I much prefer the second.  It will be more code, but
it is also more obviously correct.  The current code is rather messy with
respect to invalid ioctl commands.

I would be happy to accept a patch which aborted md_ioctl if the cmd wasn't
one of those known to md.
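
A minimal sketch of that second approach, assuming a helper named
md_ioctl_valid() and an abbreviated command list (neither is code from this
thread), might look like the following.  The point is only that an
unrecognised cmd is rejected with -ENOTTY before any mddev state, including
hold_active, can be touched:

static int md_ioctl_valid(unsigned int cmd)
{
	switch (cmd) {
	case RAID_VERSION:
	case GET_ARRAY_INFO:
	case GET_DISK_INFO:
	case ADD_NEW_DISK:
	case HOT_REMOVE_DISK:
	case SET_ARRAY_INFO:
	case SET_DISK_FAULTY:
	case RUN_ARRAY:
	case STOP_ARRAY:
	/* ... the remaining MD ioctl commands ... */
		return 0;
	default:
		return -ENOTTY;
	}
}

static int md_ioctl(struct block_device *bdev, fmode_t mode,
		    unsigned int cmd, unsigned long arg)
{
	int err = md_ioctl_valid(cmd);

	if (err)
		return err;	/* unknown command: hold_active is left alone */

	/* ... existing md_ioctl() body ... */
}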

Thanks,
NeilBrown
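
The reproducer mentioned in the report was not posted to the thread.  A rough
sketch of what such a program might look like, assuming a device node
/dev/md_test with the MD major and an arbitrary bogus ioctl number (none of
which is Nicolas's actual code):

/*
 * Hypothetical reproducer sketch -- NOT the program from the report.
 * Assumes /dev/md_test is a block node with the MD major (9) and an
 * unused minor, so its first open goes through md_probe()/md_alloc().
 */
#include <fcntl.h>
#include <signal.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

#define MD_DEV		"/dev/md_test"	/* assumed device node */
#define BOGUS_IOCTL	0xdeadbeefUL	/* any command md_ioctl() rejects */

int main(void)
{
	pid_t pid = fork();

	if (pid == 0) {
		/* "process 2": open the device and issue an invalid ioctl,
		 * which clears mddev->hold_active; the following close then
		 * frees the mddev.  Loop so the window can be hit. */
		for (;;) {
			int fd = open(MD_DEV, O_RDONLY);

			if (fd >= 0) {
				ioctl(fd, BOGUS_IOCTL, 0);
				close(fd);
			}
		}
	}

	/* "process 1": the first open of the device.  With the bug this
	 * open keeps restarting (-ERESTARTSYS) and never completes. */
	int fd = open(MD_DEV, O_RDONLY);

	if (fd < 0)
		perror("open");
	else
		close(fd);

	kill(pid, SIGKILL);
	waitpid(pid, NULL, 0);
	return 0;
}

Whether the narrow window is actually hit depends on scheduling; as noted
above, the reported livelock seems to need the racing open and ioctl to win
against mutex_lock_nested() on every pass.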



