Message-ID: <20110217011029.GA6793@redhat.com>
Date:	Wed, 16 Feb 2011 20:10:29 -0500
From:	Vivek Goyal <vgoyal@...hat.com>
To:	NeilBrown <neilb@...e.de>
Cc:	Jens Axboe <jaxboe@...ionio.com>, linux-kernel@...r.kernel.org
Subject: Re: blk_throtl_exit  taking q->queue_lock is problematic

On Thu, Feb 17, 2011 at 11:35:36AM +1100, NeilBrown wrote:
> On Wed, 16 Feb 2011 10:53:05 -0500 Vivek Goyal <vgoyal@...hat.com> wrote:
> 
> > On Wed, Feb 16, 2011 at 06:31:14PM +1100, NeilBrown wrote:
> > > 
> > > 
> > > Hi,
> > > 
> > >  I recently discovered that blk_throtl_exit takes ->queue_lock when a blockdev
> > > is finally released.
> > > 
> > > This is a problem because by that time the queue_lock doesn't exist any
> > > more.  It is in a separate data structure controlled by the RAID personality,
> > > and by the time the block device is being destroyed the RAID personality
> > > has shut down and the data structure containing the lock has been freed.
> > > 
> > > This has not been a problem before.  Nothing else takes queue_lock after
> > > blk_cleanup_queue.
> > 
> > I agree that this is a problem. blk_throtl_exit() needs the queue lock to
> > avoid races with the cgroup code and to protect its own lists, etc.
> > 
> > > 
> > > I could of course set queue_lock to point to __queue_lock and initialise that,
> > > but it seems untidy and probably violates some locking requirements.
> > > 
> > > Is there some way you could use some other lock - maybe a global lock, or
> > > maybe use __queue_lock directly?
> > 
> > Initially I had put blk_throtl_exit() in blk_cleanup_queue() where it is
> > known that ->queue_lock is still around. Due to a bug, Jens moved it
> > to blk_release_queue(). I still think that blk_cleanup_queue() is a better
> > place to call blk_throtl_exit().
> 
> Why do you say that it is known that ->queue_lock is still around in
> blk_cleanup_queue?  In md it isn't. :-(
> Is there some (other) reason that it needs to be?

I think this is only true for devices that have an elevator, because
elevator_exit() will call cfq_exit_queue() and take the queue lock. So
request-based multipath devices should still have it initialized at that point.

But yes, for devices not running an elevator, there does not seem to be any
other component that requires the queue lock to still be there.

Like the elevator, the throttling logic has data structures which need to be
cleaned up when the driver decides to clean up the queue. Up to what point can
we use queue->lock safely? If the driver provides a spinlock embedded in one of
its own structures, then it would make sense to call back into the queue at
some point, say that the spinlock is going away, and clean up any dependencies.
I thought blk_cleanup_queue() would be that call, but it looks like that is not
true for all cases.
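
For illustration, the arrangement being described looks roughly like this
(a sketch only; my_conf, my_setup and my_teardown are made-up names, and the
real raid personalities differ in detail):

#include <linux/blkdev.h>
#include <linux/spinlock.h>
#include <linux/slab.h>

struct my_conf {
	spinlock_t	device_lock;	/* the lock q->queue_lock points at */
	/* ... rest of the driver's per-device state ... */
};

static void my_setup(struct request_queue *q, struct my_conf *conf)
{
	spin_lock_init(&conf->device_lock);
	/* The queue borrows the driver's embedded lock. */
	q->queue_lock = &conf->device_lock;
}

static void my_teardown(struct request_queue *q, struct my_conf *conf)
{
	kfree(conf);		/* the embedded lock is freed here ...      */
	blk_cleanup_queue(q);	/* ... yet q->queue_lock can still be taken */
				/* later, e.g. by blk_throtl_exit() when    */
				/* blk_release_queue() runs on the final    */
				/* put of the queue.                        */
}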

So is it possible to keep the spinlock intact when md calls
blk_cleanup_queue()?
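
That is, could the teardown order on the md side be flipped so that the
structure embedding the lock outlives the queue cleanup? Roughly (again just
a sketch with made-up names, and it assumes blk_throtl_exit() is moved back
into blk_cleanup_queue() so that nothing later still needs the lock):

static void my_teardown(struct request_queue *q, struct my_conf *conf)
{
	blk_cleanup_queue(q);	/* tear down the queue (including throttle  */
				/* state) while the embedded lock is still  */
				/* valid ...                                 */
	kfree(conf);		/* ... and only then free the structure     */
				/* that contains the lock.                  */
}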

Thanks
Vivek 