Message-ID: <CA+55aFyFht3njKVep+aT_m5URxRxayZyb6K2ukW6+kazHf8EKA@mail.gmail.com>
Date: Wed, 28 Sep 2011 08:22:46 -0700
From: Linus Torvalds <torvalds@...ux-foundation.org>
To: Jens Axboe <axboe@...nel.dk>
Cc: James Bottomley <James.Bottomley@...senpartnership.com>,
Hannes Reinecke <hare@...e.de>,
James Bottomley <James.Bottomley@...allels.com>,
"linux-scsi@...r.kernel.org" <linux-scsi@...r.kernel.org>,
Linux Kernel <linux-kernel@...r.kernel.org>
Subject: Re: [GIT PULL] Queue free fix (was Re: [PATCH] block: Free queue
resources at blk_release_queue())

On Wed, Sep 28, 2011 at 7:14 AM, Jens Axboe <axboe@...nel.dk> wrote:
>
> /*
> - * Note: If a driver supplied the queue lock, it should not zap that lock
> - * unexpectedly as some queue cleanup components like elevator_exit() and
> - * blk_throtl_exit() need queue lock.
> + * Note: If a driver supplied the queue lock, it is disconnected
> + * by this function. The actual state of the lock doesn't matter
> + * here as the request_queue isn't accessible after this point
> + * (QUEUE_FLAG_DEAD is set) and no other requests will be queued.
> */

So quite frankly, I just don't believe in that comment.

If no more requests will be queued or completed, then the queue lock
is irrelevant and should not be changed.

More importantly, if no more requests are queued or completed after
blk_cleanup_queue(), then we wouldn't have had the bug that we clearly
had with the elevator accesses, now would we? So the comment seems to
be obviously bogus and wrong.

I pulled this, but I think the "just move the teardown" approach would
have been the safer option. What happens if a request completes on
another CPU just as we are changing locks, and we lock one lock and
then unlock another?!
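
To make that failure mode concrete, here is a minimal userspace sketch
of the lock-switch hazard (purely illustrative, not kernel code): the
fake_queue/fake_complete_request names and the driver_lock/internal_lock
pair are invented for the demo, error-checking pthread mutexes stand in
for the queue spinlock, and the pointer swap is done inline rather than
from another CPU so the bad interleaving is deterministic.

#include <pthread.h>
#include <stdio.h>
#include <string.h>

struct fake_queue {
	/* Points at either the driver-supplied lock or internal_lock. */
	pthread_mutex_t *queue_lock;
	pthread_mutex_t internal_lock;
};

static pthread_mutex_t driver_lock;

/* Completion path: lock and unlock through the pointer, the way the
 * block layer dereferences q->queue_lock both on entry and on exit. */
static void fake_complete_request(struct fake_queue *q, int swap_in_between)
{
	pthread_mutex_lock(q->queue_lock);		/* takes driver_lock */

	if (swap_in_between) {
		/* Stand-in for blk_cleanup_queue() on another CPU switching
		 * the queue back to its internal lock while we still hold
		 * the driver-supplied one. */
		q->queue_lock = &q->internal_lock;
	}

	/* Releases whatever the pointer happens to reference *now*. */
	int err = pthread_mutex_unlock(q->queue_lock);
	if (err)
		printf("unlock failed: %s (released a lock we never took)\n",
		       strerror(err));
	else
		printf("unlock ok\n");
}

int main(void)
{
	struct fake_queue q;
	pthread_mutexattr_t attr;

	/* Error-checking mutexes report the mismatched unlock instead of
	 * silently corrupting state, which keeps the demo well-defined. */
	pthread_mutexattr_init(&attr);
	pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_ERRORCHECK);
	pthread_mutex_init(&driver_lock, &attr);
	pthread_mutex_init(&q.internal_lock, &attr);
	q.queue_lock = &driver_lock;

	fake_complete_request(&q, 0);	/* normal: lock/unlock match        */
	fake_complete_request(&q, 1);	/* racy: lock one, unlock the other */
	return 0;
}

The second call takes driver_lock but tries to release internal_lock,
so the mismatch gets reported and driver_lock stays held; with real
spinlocks and interrupts that is a silent deadlock waiting to happen.
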
Linus