Message-Id: <20130314150709.056aa4de5fadf3a5e94103d4@linux-foundation.org>
Date:	Thu, 14 Mar 2013 15:07:09 -0700
From:	Andrew Morton <akpm@...ux-foundation.org>
To:	Phillip Susi <psusi@...ntu.com>
Cc:	axboe@...nel.dk, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] loop: cleanup partitions when detaching loop device

On Sun,  3 Mar 2013 13:49:13 -0500 Phillip Susi <psusi@...ntu.com> wrote:

> Any partitions added by user space to the loop device were being
> left in place after detaching the loop device.  This was because
> the detach path issued a BLKRRPART to clean up partitions if
> LO_FLAGS_PARTSCAN was set, meaning that the partitions were auto
> scanned on attach.  Replace this BLKRRPART with code that
> unconditionally cleans up partitions on detach instead.

huh.  What is the user-visible effect of this bug?  Just a memory leak
or something more serious?

If "something more serious", why did this problem remain hidden for so
long?
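[Editor's note: the leak being described can presumably be reproduced with a session along the lines below. This is a hypothetical sketch, not from the thread; it needs root, util-linux (sfdisk, losetup, partx), and a free loop device, and the file and device names are illustrative.]

```shell
# Hypothetical reproduction sketch -- requires root, util-linux,
# and a free loop device; names are illustrative only.
truncate -s 64M disk.img                 # empty backing file
printf ',,L\n' | sfdisk disk.img         # one Linux partition inside the image
losetup /dev/loop0 disk.img              # attach WITHOUT requesting a partition scan
partx -a /dev/loop0                      # user space adds /dev/loop0p1 via BLKPG
losetup -d /dev/loop0                    # detach the loop device
ls /dev/loop0p1                          # stale partition node left behind (the bug)
```

Because LO_FLAGS_PARTSCAN was never set, the old detach path skipped the BLKRRPART cleanup, leaving /dev/loop0p1 in place.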

> --- a/drivers/block/loop.c
> +++ b/drivers/block/loop.c
> @@ -1039,12 +1039,24 @@ static int loop_clr_fd(struct loop_device *lo)
>  	lo->lo_state = Lo_unbound;
>  	/* This is safe: open() is still holding a reference. */
>  	module_put(THIS_MODULE);
> -	if (lo->lo_flags & LO_FLAGS_PARTSCAN && bdev)
> -		ioctl_by_bdev(bdev, BLKRRPART, 0);
>  	lo->lo_flags = 0;
>  	if (!part_shift)
>  		lo->lo_disk->flags |= GENHD_FL_NO_PART_SCAN;
>  	mutex_unlock(&lo->lo_ctl_mutex);
> +	if (bdev)
> +	{

scripts/checkpatch.pl is your friend.

Can you please suggest a code comment which we can slip in here to tell
readers what's going on and why we're doing this?

> +		struct disk_part_iter piter;
> +		struct hd_struct *part;
> +
> +		mutex_lock_nested(&bdev->bd_mutex, 1);
> +		invalidate_partition(bdev->bd_disk, 0);
> +		disk_part_iter_init(&piter, bdev->bd_disk, DISK_PITER_INCL_EMPTY);
> +		while ((part = disk_part_iter_next(&piter)))
> +			delete_partition(bdev->bd_disk, part->partno);
> +		disk_part_iter_exit(&piter);
> +		mutex_unlock(&bdev->bd_mutex);
> +	}

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
