Message-Id: <1298346817-26144-2-git-send-email-vgoyal@redhat.com>
Date:	Mon, 21 Feb 2011 22:53:35 -0500
From:	Vivek Goyal <vgoyal@...hat.com>
To:	linux-kernel@...r.kernel.org, jaxboe@...ionio.com
Cc:	neilb@...e.de, sergey.senozhatsky@...il.com, tj@...nel.org,
	jmoyer@...hat.com, snitzer@...hat.com
Subject: [PATCH 1/3] block: Initialize ->queue_lock to internal lock at queue allocation time

o There does not seem to be a clear convention about whether q->queue_lock
  must be initialized by the time blk_cleanup_queue() is called. In the
  past that was not necessary, but blk_throtl_exit() now takes the queue
  lock unconditionally and needs it to be valid.

  In fact the elevator_exit() code has a similar requirement, though it
  is less stringent in the sense that elevator_exit() is called only if
  an elevator has been initialized.
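
  For reference, the relevant part of blk_throtl_exit(), heavily
  simplified down to the one locking line that matters here, shows why
  an uninitialized lock is fatal:

	void blk_throtl_exit(struct request_queue *q)
	{
		...
		/* oopses if ->queue_lock was never set */
		spin_lock_irq(q->queue_lock);
		...
	}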

o Two problems have been noticed because of this ambiguity about the
  spin lock's status.

	- If a driver calls blk_alloc_queue() and then blk_cleanup_queue()
	  almost immediately (because allocation of some other driver
	  structure failed, or some other error happened), blk_throtl_exit()
	  runs into trouble because the queue lock was never initialized.
	  The loop driver ran into this issue recently, and I noticed
	  similar error paths in the md driver; comparable paths likely
	  exist in other drivers too. A sketch of this failure mode
	  follows the list below.

	- If a driver provided an external spin lock and zapped that lock
	  before calling blk_cleanup_queue(), cleanup ends up taking a
	  lock that no longer exists.
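
  As an illustration of the first problem, here is a minimal sketch of
  a hypothetical driver init path (my_drv_init, struct my_dev and the
  kzalloc() of dev are made up for the example):

	static int my_drv_init(void)
	{
		struct request_queue *q;
		struct my_dev *dev;

		q = blk_alloc_queue(GFP_KERNEL);
		if (!q)
			return -ENOMEM;

		dev = kzalloc(sizeof(*dev), GFP_KERNEL);
		if (!dev) {
			/*
			 * blk_init_queue() never ran, so without this
			 * patch q->queue_lock is still NULL here and
			 * blk_throtl_exit() oopses on it.
			 */
			blk_cleanup_queue(q);
			return -ENOMEM;
		}
		/* ... rest of device setup ... */
		return 0;
	}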

o So this patch does two things.

	- Initialize ->queue_lock to the internal lock at queue allocation
	  time. The block throttling code is one of the users of the queue
	  lock and is itself set up at allocation time, so it makes sense
	  to point ->queue_lock at the internal lock there as well. A
	  driver can override that lock later (see the sketch after this
	  list). This takes care of the first issue: a driver no longer
	  has to worry about initializing the queue lock before calling
	  blk_cleanup_queue().

	- Put a WARN_ON() in blk_cleanup_queue() to make sure ->queue_lock
	  is initialized. This will catch the case of a buggy driver that
	  provides an external queue lock but zaps it unexpectedly, so the
	  WARN_ON() flags the problem and we can fix the driver.

	  Recently NeilBrown noted that md is doing something similar, and
	  he has queued a fix in his tree. This patch relies on that fix;
	  otherwise md might trigger the WARN_ON() and then crash in
	  blk_throtl_exit().
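
  For completeness, a sketch of how a driver picks its lock after this
  patch (my_request_fn, my_lock and my_drv_setup are illustrative):

	static DEFINE_SPINLOCK(my_lock);

	static void my_request_fn(struct request_queue *q)
	{
		/* dispatch requests; called with q->queue_lock held */
	}

	static int my_drv_setup(void)
	{
		/* A driver-supplied lock overrides the internal one... */
		struct request_queue *q = blk_init_queue(my_request_fn,
							 &my_lock);

		/*
		 * ...while blk_init_queue(my_request_fn, NULL) now keeps
		 * &q->__queue_lock, set up at queue allocation time.
		 */
		return q ? 0 : -ENOMEM;
	}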

Signed-off-by: Vivek Goyal <vgoyal@...hat.com>
---
 block/blk-core.c     |   19 ++++++++++++++++++-
 block/blk-settings.c |    7 -------
 2 files changed, 18 insertions(+), 8 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index 2f4002f..9a8e256 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -459,6 +459,14 @@ void blk_put_queue(struct request_queue *q)
 void blk_cleanup_queue(struct request_queue *q)
 {
 	/*
+	 * If a driver supplied the queue lock, it must not zap that
+	 * lock unexpectedly, as queue cleanup components like
+	 * elevator_exit() and blk_throtl_exit() need the queue lock.
+	 * Hence warn if the queue lock is NULL.
+	 */
+	WARN_ON(!q->queue_lock);
+
+	/*
 	 * We know we have process context here, so we can be a little
 	 * cautious and ensure that pending block actions on this device
 	 * are done before moving on. Going into this function, we should
@@ -548,6 +556,12 @@ struct request_queue *blk_alloc_queue_node(gfp_t gfp_mask, int node_id)
 	mutex_init(&q->sysfs_lock);
 	spin_lock_init(&q->__queue_lock);
 
+	/*
+	 * By default, initialize queue_lock to the internal lock; a
+	 * driver can override it later if need be.
+	 */
+	q->queue_lock = &q->__queue_lock;
+
 	return q;
 }
 EXPORT_SYMBOL(blk_alloc_queue_node);
@@ -632,7 +646,10 @@ blk_init_allocated_queue_node(struct request_queue *q, request_fn_proc *rfn,
 	q->unprep_rq_fn		= NULL;
 	q->unplug_fn		= generic_unplug_device;
 	q->queue_flags		= QUEUE_FLAG_DEFAULT;
-	q->queue_lock		= lock;
+
+	/* Override internal queue lock with supplied lock pointer */
+	if (lock)
+		q->queue_lock		= lock;
 
 	/*
 	 * This also sets hw/phys segments, boundary and size
diff --git a/block/blk-settings.c b/block/blk-settings.c
index 36c8c1f..df649fa 100644
--- a/block/blk-settings.c
+++ b/block/blk-settings.c
@@ -176,13 +176,6 @@ void blk_queue_make_request(struct request_queue *q, make_request_fn *mfn)
 	blk_queue_max_hw_sectors(q, BLK_SAFE_MAX_SECTORS);
 
 	/*
-	 * If the caller didn't supply a lock, fall back to our embedded
-	 * per-queue locks
-	 */
-	if (!q->queue_lock)
-		q->queue_lock = &q->__queue_lock;
-
-	/*
 	 * by default assume old behaviour and bounce for any highmem page
 	 */
 	blk_queue_bounce_limit(q, BLK_BOUNCE_HIGH);
-- 
1.7.1
