Date:	Thu, 17 Feb 2011 15:00:11 -0500
From:	Vivek Goyal <vgoyal@...hat.com>
To:	NeilBrown <neilb@...e.de>
Cc:	Jens Axboe <jaxboe@...ionio.com>, linux-kernel@...r.kernel.org
Subject: Re: blk_throtl_exit  taking q->queue_lock is problematic

On Wed, Feb 16, 2011 at 06:31:14PM +1100, NeilBrown wrote:
> 
> 
> Hi,
> 
>  I recently discovered that blk_throtl_exit takes ->queue_lock when a blockdev
> is finally released.
> 
> This is a problem because by that time the queue_lock doesn't exist any
> more.  It is in a separate data structure controlled by the RAID personality
> and by the time that the block device is being destroyed the raid personality
> has shutdown and the data structure containing the lock has been freed.

Hi Neil,

I am having a look at queue allocation in md and had a few queries.

I was looking at md_alloc(), where we do

mddev->queue = blk_alloc_queue(GFP_KERNEL);
blk_queue_make_request(mddev->queue, md_make_request);

The call to blk_queue_make_request() makes sure queue_lock is initialized
to the internal __queue_lock.

Then I looked at raid0_run(), which is again setting the queue lock.

mddev->queue->queue_lock = &mddev->queue->__queue_lock;

I think this is now redundant, as we already did it during md_alloc().

The same seems to be the case for linear.c and multipath.c.

The following seem to be the cases where we override the default lock:

raid1.c, raid5.c, raid10.c

I was going through raid1.c, and I see that q->queue_lock has
been initialized to &conf->device_lock. Could we do the reverse: introduce
a spinlock pointer in conf and point it at queue->queue_lock? Anyway, you
mentioned that the personality's data structures are freed before the request
queue is cleaned up, so there should not be any lifetime issues.

Also, I was wondering how it helps to share the lock between the request
queue and other data structures in the driver.

Thanks
Vivek
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
